
25 October 2023

Phil Hands: Sleep Apnoea

I just noticed that I wrote this a decade ago, and then never got round to posting it, so thought I might kick it off now to mark my tentative return to blogging. At the recent 2015 Cambridge-UK Mini-DebConf (generously hosted by ARM), I gave an impromptu Lightning Talk about Sleep Apnoea (video here). Obstructive Sleep Apnoea (OSA - the form I'm on about) is a sleep disorder where one repeatedly stops breathing while asleep, normally when snoring, but not necessarily. The consequence of this is that in order to resume breathing one must wake up momentarily. These events are not remembered, but they ruin the quality of your sleep. If you find that you're often quite tired, you should probably give the Epworth Sleepiness Scale a try -- if it suggests you have a problem: Get thee to a doctor for a check-up! The good news is that if you do turn out to have OSA it's fairly easy to treat (CPAP or more recently APAP being the favoured treatment), and that when treated you should be able to get good quality sleep that will result in you being much more awake, and much more cheerful. If you might be an Apnoeac (or a sufferer of some other sleep disorder, for that matter), get yourself treated, and you'll be able to use the extra hours of daily concentration working on Debian, thus making the world a better place :-)

Russ Allbery: Review: Going Infinite

Review: Going Infinite, by Michael Lewis
Publisher: W.W. Norton & Company
Copyright: 2023
ISBN: 1-324-07434-5
Format: Kindle
Pages: 255
My first reaction when I heard that Michael Lewis had been embedded with Sam Bankman-Fried working on a book when Bankman-Fried's cryptocurrency exchange FTX collapsed into bankruptcy after losing billions of dollars of customer deposits was "holy shit, why would you talk to Michael Lewis about your dodgy cryptocurrency company?" Followed immediately by "I have to read this book." This is that book. I wasn't sure how Lewis would approach this topic. His normal (although not exclusive) area of interest is financial systems and crises, and there is lots of room for multiple books about cryptocurrency fiascoes using someone like Bankman-Fried as a pivot. But Going Infinite is not like The Big Short or Lewis's other financial industry books. It's a nearly straight biography of Sam Bankman-Fried, with just enough context for the reader to follow his life. To understand what you're getting in Going Infinite, I think it's important to understand what sort of book Lewis likes to write. Lewis is not exactly a reporter, although he does explain complicated things for a mass audience. He's primarily a storyteller who collects people he finds fascinating. This book was therefore never going to be like, say, Carreyrou's Bad Blood or Isaac's Super Pumped. Lewis's interest is not in a forensic account of how FTX or Alameda Research were structured. His interest is in what makes Sam Bankman-Fried tick, what's going on inside his head. That's not a question Lewis directly answers, though. Instead, he shows you Bankman-Fried as Lewis saw him and was able to reconstruct from interviews and sources and lets you draw your own conclusions. Boy did I ever draw a lot of conclusions, most of which were highly unflattering. However, one conclusion I didn't draw, and had been dubious about even before reading this book, was that Sam Bankman-Fried was some sort of criminal mastermind who intentionally plotted to steal customer money. Lewis clearly doesn't believe this is the case, and with the caveat that my study of the evidence outside of this book has been spotty and intermittent, I think Lewis has the better of the argument. I am utterly fascinated by this, and I'm afraid this review is going to turn into a long summary of my take on the argument, so here's the capsule review before you get bored and wander off: This is a highly entertaining book written by an excellent storyteller. I am also inclined to believe most of it is true, but given that I'm not on the jury, I'm not that invested in whether Lewis is too credulous towards the explanations of the people involved. What I do know is that it's a fantastic yarn with characters who are too wild to put in fiction, and I thoroughly enjoyed it. There are a few things that everyone involved appears to agree on, and therefore I think we can take as settled. One is that Bankman-Fried, and most of the rest of FTX and Alameda Research, never clearly distinguished between customer money and all of the other money. It's not obvious that their home-grown accounting software (written entirely by one person! who never spoke to other people! in code that no one else could understand!) was even capable of clearly delineating between their piles of money. Another is that FTX and Alameda Research were thoroughly intermingled. There was no official reporting structure and possibly not even a coherent list of employees. The environment was so chaotic that lots of people, including Bankman-Fried, could have stolen millions of dollars without anyone noticing. 
But it was also so chaotic that they could, and did, literally misplace millions of dollars by accident, or because Bankman-Fried had problems with object permanence. Something that was previously less obvious from news coverage but that comes through very clearly in this book is that Bankman-Fried seriously struggled with normal interpersonal and societal interactions. We know from multiple sources that he was diagnosed with ADHD and depression (Lewis describes it specifically as anhedonia, the inability to feel pleasure). The ADHD in Lewis's account is quite severe and does not sound controlled, despite medication; for example, Bankman-Fried routinely played timed video games while he was having important meetings, forgot things the moment he stopped dealing with them, was constantly on his phone or seeking out some other distraction, and often stimmed (by bouncing his leg) to a degree that other people found it distracting. Perhaps more tellingly, Bankman-Fried repeatedly describes himself in diary entries and correspondence to other people (particularly Caroline Ellison, his employee and on-and-off secret girlfriend) as being devoid of empathy and unable to access his own emotions, which Lewis supports with stories from former co-workers. I'm very hesitant to diagnose someone via a book, but, at least in Lewis's account, Bankman-Fried nearly walks down the symptom list of antisocial personality disorder in his own description of himself to other people. (The one exception is around physical violence; there is nothing in this book or in any of the other reporting that I've seen to indicate that Bankman-Fried was violent or physically abusive.) One of the recurrent themes of this book is that Bankman-Fried never saw the point in following rules that didn't make sense to him or worrying about things he thought weren't important, and therefore simply didn't. By about a third of the way into this book, before FTX is even properly started, very little about its eventual downfall will seem that surprising. There was no way that Sam Bankman-Fried was going to be able to run a successful business over time. He was extremely good at probabilistic trading and spotting exploitable market inefficiencies, and extremely bad at essentially every other aspect of living in a society with other people, other than a hit-or-miss ability to charm that worked much better with large audiences than one-on-one. The real question was why anyone would ever entrust this man with millions of dollars or decide to work for him for longer than two weeks. The answer to those questions changes over the course of this story. Later on, it was timing. Sam Bankman-Fried took the techniques of high frequency trading he learned at Jane Street Capital and applied them to exploiting cryptocurrency markets at precisely the right time in the cryptocurrency bubble. There was far more money than sense, the most ruthless financial players were still too leery to get involved, and a rising tide was lifting all boats, even the ones that were piles of driftwood. When cryptocurrency inevitably collapsed, so did his businesses. In retrospect, that seems inevitable. The early answer, though, was effective altruism. A full discussion of effective altruism is beyond the scope of this review, although Lewis offers a decent introduction in the book. 
The short version is that a sensible and defensible desire to use stronger standards of evidence in evaluating charitable giving turned into a bizarre navel-gazing exercise in making up statistical risks to hypothetical future people and treating those made-up numbers as if they should be the bedrock of one's personal ethics. One of the people most responsible for this turn is an Oxford philosopher named Will MacAskill. Sam Bankman-Fried was already obsessed with utilitarianism, in part due to his parents' philosophical beliefs, and it was a presentation by Will MacAskill that converted him to the effective altruism variant of extreme utilitarianism. In Lewis's presentation, this was like joining a cult. The impression I came away with feels like something out of a science fiction novel: Bankman-Fried knew there was some serious gap in his thought processes where most people had empathy, was deeply troubled by this, and latched on to effective altruism as the ethical framework to plug into that hole. So much of effective altruism sounds like a con game that it's easy to think the participants are lying, but Lewis clearly believes Bankman-Fried is a true believer. He appeared to be sincerely trying to make money in order to use it to solve existential threats to society, he does not appear to be motivated by money apart from that goal, and he was following through (in bizarre and mostly ineffective ways). I find this particularly believable because effective altruism as a belief system seems designed to fit Bankman-Fried's personality and justify the things he wanted to do anyway. Effective altruism says that empathy is meaningless, emotion is meaningless, and ethical decisions should be made solely on the basis of expected value: how much return (usually in safety) does society get for your investment. Effective altruism says that all the things that Sam Bankman-Fried was bad at were useless and unimportant, so he could stop feeling bad about his apparent lack of normal human morality. The only thing that mattered was the thing that he was exceptionally good at: probabilistic reasoning under uncertainty. And, critically to the foundation of his business career, effective altruism gave him access to investors and a recruiting pool of employees, things he was entirely unsuited to acquiring the normal way. There's a ton more of this book that I haven't touched on, but this review is already quite long, so I'll leave you with one more point. I don't know how true Lewis's portrayal is in all the details. He took the approach of getting very close to most of the major players in this drama and largely believing what they said happened, supplemented by startling access to sources like Bankman-Fried's personal diary and Caroline Ellison's personal diary. (He also seems to have gotten extensive information from the personal psychiatrist of most of the people involved; I'm not sure if there's some reasonable explanation for this, but based solely on the material in this book, it seems to be a shocking breach of medical ethics.) But Lewis is a storyteller more than he's a reporter, and his bias is for telling a great story. It's entirely possible that the events related here are not entirely true, or are skewed in favor of making a better story. It's certainly true that they're not the complete story. But, that said, I think a book like this is a useful counterweight to the human tendency to believe in moral villains. 
This is, frustratingly, a counterweight extended almost exclusively to higher-class white people like Bankman-Fried. This is infuriating, but that doesn't make it wrong. It means we should extend that analysis to more people. Once FTX collapsed, a lot of people became very invested in the idea that Bankman-Fried was a straightforward embezzler. Either he intended from the start to steal everyone's money or, more likely, he started losing money, panicked, and stole customer money to cover the hole. Lots of people in history have done exactly that, and lots of people involved in cryptocurrency have tenuous attachments to ethics, so this is a believable story. But people are complicated, and there's also truth in the maxim that every villain is the hero of their own story. Lewis is after a less boring story than "the crook stole everyone's money," and that leads to some bias. But sometimes the less boring story is also true. Here's the thing: even if Sam Bankman-Fried never intended to take any money, he clearly did intend to mix customer money with Alameda Research funds. In Lewis's account, he never truly believed in them as separate things. He didn't care about following accounting or reporting rules; he thought they were boring nonsense that got in his way. There is obvious criminal intent here in any reading of the story, so I don't think Lewis's more complex story would let him escape prosecution. He refused to follow the rules, and as a result a lot of people lost a lot of money. I think it's a useful exercise to leave mental space for the possibility that he had far less obvious reasons for those actions than that he was a simple thief, while still enforcing the laws that he quite obviously violated. This book was great. If you like Lewis's style, this was some of the best entertainment I've read in a while. Highly recommended; if you are at all interested in this saga, I think this is a must-read. Rating: 9 out of 10

22 October 2023

Ian Jackson: DigiSpark (ATTiny85) - Arduino, C, Rust, build systems

Recently I completed a small project, including an embedded microcontroller. For me, using the popular Arduino IDE, and C, was a mistake. The experience with Rust was better, but still very exciting, and not in a good way. Here follows the rant.

Introduction

In a recent project (I'll write about the purpose, and the hardware, in another post) I chose to use a DigiSpark board. This is a small board with a USB-A tongue (but not a proper plug), and an ATTiny85 microcontroller. This chip has 8 pins and is quite small really, but it was plenty for my application. By choosing something popular, I hoped for convenient hardware, and an uncomplicated experience. Convenient hardware, I got.

Arduino IDE

The usual way to program these boards is via an IDE. I thought I'd go with the flow and try that. I knew these were closely related to actual Arduinos and saw that the IDE package arduino was in Debian. But it turns out that the Debian package's version doesn't support the DigiSpark. (AFAICT from the list it offered me, I'm not sure it supports any ATTiny85 board.) Also, disturbingly, its board manager seemed to be offering to install board support, suggesting it would download stuff from the internet and run it. That wouldn't be acceptable for my main laptop. I didn't expect to be doing much programming or debugging, and the project didn't have significant security requirements: the chip, in my circuit, has only a very narrow ability to do anything to the real world, and no network connection of any kind. So I thought it would be tolerable to do the project on my low-security "video laptop". That's the machine where I'm prepared to say yes to installing random software off the internet. So I went to the upstream Arduino site and downloaded a tarball containing the Arduino IDE. After unpacking that in /opt it ran and produced a pointy-clicky IDE, as expected. I had already found a 3rd-party tutorial saying I needed to add a magic URL (from the DigiSpark's vendor) in the preferences. That indeed allowed it to download a whole pile of stuff. Compilers, bootloader clients, god knows what. However, my tiny test program didn't make it to the board. Half-buried in a too-small window was an error message about the board's bootloader ("Micronucleus") being too new. The boards I had came pre-flashed with micronucleus 2.2. Which is hardly new. But even so the official Arduino IDE (or maybe the DigiSpark's board package?) still contains an old version. So now we have all the downsides of curl|bash-ware, but we're lacking the "it's up to date" and "it just works" upsides. Further digging found some random forum posts which suggested simply downloading a newer micronucleus and manually stuffing it into the right place: one overwrites a specific file, in the middle of the heaps of stuff that the Arduino IDE's board support downloader squirrels away in your home directory. (In my case, the home directory of the untrusted shared user on the "video laptop".) So, whatever. I did that. And it worked! Having demo'd my ability to run code on the board, I set about writing my program.

Writing C again

The programming language offered via the Arduino IDE is C. It's been a little while since I started a new thing in C, after having spent so much of the last several years writing Rust. C's primitiveness quickly started to grate, and the program couldn't easily be as DRY as I wanted (Don't Repeat Yourself, see Wilson et al, 2012, §4, p.6). But, I carried on; after all, this was going to be quite a small job. 
Soon enough I had a program that looked right and compiled. Before testing it in circuit, I wanted to do some QA. So I wrote a simulator harness that #included my Arduino source file, and provided imitations of the few Arduino library calls my program used. As a side advantage, I could build and run the simulation on my main machine, in my normal development environment (Emacs, make, etc.). The simulator runs confirmed the correct behaviour. (Perhaps there would have been some more faithful simulation tool, but the Arduino IDE didn't seem to offer it, and I wasn't inclined to go further down that kind of path.) So I got the "video laptop" out, and used the Arduino IDE to flash the program. It didn't run properly. It hung almost immediately. Some very ad-hoc debugging via LED-blinking (like printf debugging, only much worse) convinced me that my problem was as follows: Arduino C has 16-bit ints. My test harness was on my 64-bit Linux machine. C was autoconverting things (when building for the microcontroller). The way the Arduino IDE ran the compiler didn't pass the warning options necessary to spot narrowing implicit conversions. Those warnings aren't the default in C in general because C compilers hate us all for compatibility reasons. I don't know why those warnings are not the default in the Arduino IDE, but my guess is that they didn't want to bother poor novice programmers with messages from the compiler explaining how their program is quite possibly wrong. After all, users don't like error messages so we shouldn't report errors. And novice programmers are especially fazed by error messages so it's better to just let them struggle themselves with the arcane mysteries of undefined behaviour in C? The Arduino IDE does offer a dropdown for "compiler warnings". The default is None. Setting it to All didn't produce anything about my integer overflow bugs. And, the output was very hard to find anyway because the log window has a constant stream of strange messages from javax.jmdns, with hex DNS packet dumps. WTF. Other things that were vexing about the Arduino IDE: it has fairly fixed notions (which don't seem to be documented) about how your files and directories ought to be laid out, and magical machinery for finding things you put nearby its "sketch" (as it calls them) and sticking them in its ear, causing lossage. It has a tendency to become confused if you edit files under its feet (e.g. with git checkout). It wasn't really very suited to a workflow where principal development occurs elsewhere. And, important settings such as the project's clock speed, or even the target board, or the compiler warning settings to use, weren't stored in the project directory along with the actual code. I didn't look too hard, but I presume they must be in a dotfile somewhere. This is madness. Apparently there is an Arduino CLI too. But I was already quite exasperated, and I didn't like the idea of going so far off the beaten path, when the whole point of using all this was to stay with popular tooling and share fate with others. (How do these others cope? I have no idea.) As for the integer overflow bug: I didn't seriously consider trying to figure out how to control in detail the C compiler options passed by the Arduino IDE. (Perhaps this is possible, but not really documented?) 
I did consider trying to run a cross-compiler myself from the command line, with appropriate warning options, but that would have involved providing (or stubbing, again) the Arduino/DigiSpark libraries (and bugs could easily lurk at that interface). Instead, I thought, "if only I had written the thing in Rust". But that wasn't possible, was it? Does Rust even support this board?

Rust on the DigiSpark

I did a cursory web search and found a very useful blog post by Dylan Garrett. This encouraged me to think it might be a workable strategy. I looked at the instructions there. It seemed like I could run them via the privsep arrangement I use to protect myself when developing using upstream cargo packages from crates.io. I got surprisingly far surprisingly quickly. It did, rather startlingly, cause my rustup to download a random recent Nightly Rust, but I have six of those already for other Reasons. Very quickly I got the trinket LED blink example, referenced by Dylan's blog post, to compile. Manually copying the file to the "video laptop" allowed me to run the previously-downloaded micronucleus executable and successfully run the blink example on my board! I thought a more principled approach to the bootloader client might allow a more convenient workflow. I found the upstream Micronucleus git releases and tags, and had a look over its source code, release dates, etc. It seemed plausible, so I compiled v2.6 from source. That was a success: now I could build and install a Rust program onto my board, from the command line, on my main machine. No more pratting about with the "video laptop". I had got further, more quickly, with Rust, than with the Arduino IDE, and the outcome and workflow was superior. So, basking in my success, I copied the directory containing the example into my own project, renamed it, and adjusted the path references. That didn't work. Now it didn't build. Even after I copied over .cargo/config.toml and rust-toolchain.toml it didn't build, producing a variety of exciting messages, depending what precisely I tried. I don't have detailed logs of my flailing: the instructions say to build it by cd'ing to the subdirectory, and, given that what I was trying to do was to not follow those instructions, it didn't seem sensible to try to prepare a proper repro so I could file a ticket. I wasn't optimistic about investigating it more deeply myself: I have some experience of fighting cargo, and it's not usually fun. Looking at some of the build control files, things seemed quite complicated. Additionally, not all of the crates are on crates.io. I have no idea why not. So, I would need to supply local copies of them anyway. I decided to just git subtree add the avr-hal git tree. (That seemed better than the approach taken by the avr-hal project's cargo template, since that template involves a cargo dependency on a foreign git repository. Perhaps it would be possible to turn them into path dependencies, but given that I had evidence of file-location-sensitive behaviour, which I didn't feel like I wanted to spend time investigating, using that seemed like it would possibly have invited more trouble. Also, I don't like package templates very much. They're a form of clone-and-hack: you end up stuck with whatever bugs or oddities exist in the version of the template which was current when you started.) Since I couldn't get things to build outside avr-hal, I edited the example, within avr-hal, to refer to my (one) program.rs file outside avr-hal, with a #[path] instruction. 
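For illustration, a minimal sketch of that #[path] arrangement (the relative path and module name here are made up, not taken from the post): the module declaration lives in the avr-hal example's source file, but the code it pulls in sits outside the avr-hal tree.

// in the example's main.rs inside the avr-hal tree
#[path = "../../../../myproject/program.rs"]  // hypothetical relative path out of avr-hal
mod program;  // program.rs is then compiled as if it were a local module of the example crate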
That's not pretty, but it worked. I also had to write a nasty shell script to work around the lack of good support in my nailing-cargo privsep tool for builds where cargo must be invoked in a deep subdirectory, and/or Cargo.lock isn't where it expects, and/or the target directory containing build products is in a weird place. It also has to filter the output from cargo to adjust the pathnames in the error messages. Otherwise, running both cd A; cargo build and cd B; cargo build from a Makefile produces confusing sets of error messages, some of which contain filenames relative to A and some relative to B, making it impossible for my Emacs to reliably find the right file.

RIIR (Rewrite It In Rust)

Having got my build tooling sorted out I could go back to my actual program. I translated the main program, and the simulator, from C to Rust, more or less line-by-line. I made the Rust version of the simulator produce the same output format as the C one. That let me check that the two programs had the same (simulated) behaviour. Which they did (after fixing a few glitches in the simulator log formatting). Emboldened, I flashed the Rust version of my program to the DigiSpark. It worked right away! RIIR had caused the bug to vanish. Of course, to rewrite the program in Rust, and get it to compile, it was necessary to be careful about the types of all the various integers, so that's not so surprising. Indeed, it was the point. I was then able to refactor the program to be a bit more natural and DRY, and improve some internal interfaces. Rust's greater power, compared to C, made those cleanups easier, so making them worthwhile. However, when doing real-world testing I found a weird problem: my timings were off. Measured, the real program was too fast by a factor of slightly more than 2. A bit of searching (and searching my memory) revealed the cause: I was using a board template for an Adafruit Trinket. The Trinket has a clock speed of 8MHz. But the DigiSpark runs at 16.5MHz. (This is discussed in a ticket against one of the C/C++ libraries supporting the ATTiny85 chip.) The Arduino IDE had offered me a choice of clock speeds. I have no idea how that dropdown menu took effect; I suspect it was adding prelude code to adjust the clock prescaler. But my attempts to mess with the CPU clock prescaler register by hand at the start of my Rust program didn't bear fruit. So instead, I adopted a bodge: since my code has (for code structure reasons, amongst others) only one place where it deals with the underlying hardware's notion of time, I simply changed my delay function to adjust the passed-in delay values, compensating for the wrong clock speed. There was probably a more principled way. For example I could have (re)based my work on either of the two unmerged open MRs which added proper support for the DigiSpark board, rather than abusing the Adafruit Trinket definition. But, having a nearly-working setup, and an explanation for the behaviour, I preferred the narrower fix to reopening any cans of worms.
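As an aside, a minimal sketch of that kind of bodge (the names, the stand-in helper, and the exact arithmetic are mine, not from the post): every delay goes through one wrapper, which stretches the requested time to compensate for the board definition assuming an 8 MHz Trinket while the DigiSpark actually runs at 16.5 MHz.

const ACTUAL_HZ: u64 = 16_500_000;   // what the DigiSpark really runs at
const ASSUMED_HZ: u64 = 8_000_000;   // what the Adafruit Trinket board template assumes

// The one place in the program that deals with the hardware's notion of time:
// scale the requested delay up so the too-fast clock comes out right in wall-clock terms.
fn delay_ms(ms: u32) {
    let adjusted = (ms as u64 * ACTUAL_HZ / ASSUMED_HZ) as u32;
    hal_delay_ms(adjusted);
}

// hypothetical stand-in for the underlying HAL delay call
fn hal_delay_ms(_ms: u32) { /* busy-waits on the real hardware */ }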
An offer of help

As will be obvious from this posting, I'm not an expert in dev tools for embedded systems. Far from it. This area seems like quite a deep swamp, and I'm probably not the person to help drain it. (Frankly, much of the improvement work ought to be done, and paid for, by hardware vendors.) But, as a full Member of the Debian Project, I have considerable gatekeeping authority there. I also have much experience of software packaging, build systems, and release management. If anyone wants to try to improve the situation with embedded tooling in Debian, and is willing to do the actual packaging work, I would be happy to advise, and to review and sponsor your contributions. An obvious candidate: it seems to me that micronucleus could easily be in Debian. Possibly a DigiSpark board definition could be provided to go with the arduino package. Unfortunately, IMO Debian's Rust packaging tooling and workflows are very poor, and the first of my suggestions for improvement wasn't well received. So if you need help with improving Rust packages in Debian, please talk to the Debian Rust Team yourself.

Conclusions

Embedded programming is still rather a mess and probably always will be. Embedded build systems can be bizarre. Documentation is scant. You're often expected to download board support packages full of mystery binaries, from the board vendor (or others). Dev tooling is maddening, especially if aimed at novice programmers. You want version control? Hermetic tracking of your project's build and install configuration? Actually to be told by the compiler when you write obvious bugs? You're way off the beaten track. As ever, Free Software is under-resourced and the maintainers are often busy, or (reasonably) have other things to do with their lives. All is not lost. Rust can be a significantly better bet than C for embedded software: the Rust compiler will catch a good proportion of programming errors, and an experienced Rust programmer can arrange (by suitable internal architecture) to catch nearly all of them. When writing for a chip in the middle of some circuit, where debugging involves staring at an LED or a multimeter, that's precisely what you want. Rust embedded dev tooling was, in this case, considerably better. Still quite chaotic and strange, and less mature, perhaps. But: significantly fewer mystery downloads, and significantly less crazy deviations from the language's normal build system. Overall, less bad software supply chain integrity. The ATTiny85 chip, and the DigiSpark board, served my hardware needs very well. (More about the hardware aspects of this project in a future posting.)


21 October 2023

Dirk Eddelbuettel: qlcal 0.0.8 on CRAN: QuantLib 1.32 Updates

The eighth release of the still fairly new qlcal package arrived at CRAN today. qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, its complement (i.e. business day lists), and much more. This release brings updates from the just-released QuantLib 1.32 version. It also avoids a nag from R during build ("only specify C++14 if you really need it") by switching to a versioned depends on R 4.2.0 or later. This implies C++14 or later as the default. If you need qlcal on an older R, grab the sources, edit DESCRIPTION to remove this constraint, and set the standard as before in src/Makevars (or src/Makevars.win).

Changes in version 0.0.8 (2023-10-21)
  • A small set of updates from QuantLib 1.32 have been applied
  • The explicit C++14 compilation standard has been replaced with an implicit one by relying on R (>= 4.2.0)

Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

19 October 2023

Russ Allbery: Review: The Cassini Division

Review: The Cassini Division, by Ken MacLeod
Series: Fall Revolution #3
Publisher: Tor
Copyright: 1998
Printing: August 2000
ISBN: 0-8125-6858-3
Format: Mass market
Pages: 305
The Cassini Division is the third book in the Fall Revolution series and a fairly direct sequel (albeit with different protagonists) to The Stone Canal. This is not a good place to start the series. It's impossible to talk about the plot of this book without discussing the future history of this series, which arguably includes some spoilers for The Star Fraction and The Stone Canal. I don't think the direction of history matters that much in enjoying the previous books, but read the first two books of the series before this review if you want to avoid all spoilers. When the Outwarders uploaded themselves and went fast, they did a lot of strange things: an interstellar probe contrary to all known laws of physics, the disassembly of Ganymede, and the Malley Mile, which plays a significant role in The Stone Canal. They also crashed the Earth. This was not entirely their fault. There were a lot of politics, religious fundamentalism, and plagues in play as well. But the storm of viruses broadcast from their transformed Jupiter shut down essentially all computing equipment on Earth, which set off much of the chaos. The results were catastrophic, and also politically transformative. Now, the Solar Union is a nearly unified anarchosocialist society, with only scattered enclaves of non-cooperators left outside that structure. Ellen May Ngwethu is a leader of the Cassini Division, the bulwark that stands between humans and the Outwarders. The Division ruthlessly destroys any remnant or probe that dares rise out of Jupiter's atmosphere, ensuring that the Outwarders, whatever they have become after untold generations of fast evolution, stay isolated to the one planet they have absorbed. The Division is very good at what they do. But there is a potential gap in that line of defense: there are fast folk in storage at the other end of the Malley Mile, on New Mars, and who knows what the deranged capitalists there will do or what forces they might unleash. The one person who knows a path through the Malley Mile isn't talking, so Ellen goes in search of the next best thing: the non-cooperator scientist Isambard Kingdom Malley. I am now thoroughly annoyed at how politics are handled in this series, and much less confused by the frequency with which MacLeod won Prometheus Awards from the Libertarian Futurist Society. Some of this is my own fault for having too high of hopes for political SF, but nothing in this series so far has convinced me that MacLeod is seriously engaging with political systems. Instead, the world-building to date makes the classic libertarian mistake of thinking societies will happily abandon stability and predictability in favor of their strange definition of freedom. The Solar Union is based on what Ellen calls the true knowledge, which is worth quoting in full so that you know what kind of politics we're talking about:
Life is a process of breaking down and using other matter, and if need be, other life. Therefore, life is aggression, and successful life is successful aggression. Life is the scum of matter, and people are the scum of life. There is nothing but matter, forces, space and time, which together make power. Nothing matters, except what matters to you. Might makes right, and power makes freedom. You are free to do whatever is in your power, and if you want to survive and thrive you had better do whatever is in your interests. If your interests conflict with those of others, let the others pit their power against yours, everyone for theirselves. If your interests coincide with those of others, let them work together with you, and against the rest. We are what we eat, and we eat everything. All that you really value, and the goodness and truth and beauty of life, have their roots in this apparently barren soil. This is the true knowledge. We had founded our idealism on the most nihilistic implications of science, our socialism on crass self-interest, our peace on our capacity for mutual destruction, and our liberty on determinism. We had replaced morality with convention, bravery with safety, frugality with plenty, philosophy with science, stoicism with anaesthetics and piety with immortality. The universal acid of the true knowledge had burned away a world of words, and exposed a universe of things. Things we could use.
This is certainly something that some people will believe, particularly cynical college students who love political theory, feeling smarter than other people, and calling their pet theories things like "the true knowledge." It is not even remotely believable as the governing philosophy of a solar confederation. The point of government for the average person in human society is to create and enforce predictable mutual rules that one can use as a basis for planning and habits, allowing you to not think about politics all the time. People who adore thinking about politics have great difficulty understanding how important it is to everyone else to have ignorable government. Constantly testing your power against other coalitions is a sport, not a governing philosophy. Given the implication that this testing is through violence or the threat of violence, it beggars belief that any large number of people would tolerate that type of instability for an extended period of time. Ellen is fully committed to the true knowledge. MacLeod likely is not; I don't think this represents the philosophy of the author. But the primary political conflict in this novel famous for being political science fiction is between the above variation of anarchy and an anarchocapitalist society, neither of which are believable as stable political systems for large numbers of people. This is a bit like seeking out a series because you were told it was about a great clash of European monarchies and discovering it was about a fight between Liberland and Sealand. It becomes hard to take the rest of the book seriously. I do realize that one point of political science fiction is to play with strange political ideas, similar to how science fiction plays with often-implausible science ideas. But those ideas need some contact with human nature. If you're going to tell me that the key to clawing society back from a world-wide catastrophic descent into chaos is to discard literally every social system used to create predictability and order, you had better be describing aliens, because that's not how humans work. The rest of the book is better. I am untangling a lot of backstory for the above synopsis, which in the book comes in dribs and drabs, but piecing that together is good fun. The plot is far more straightforward than the previous two books in the series: there is a clear enemy, a clear goal, and Ellen goes from point A to point B in a comprehensible way with enough twists to keep it interesting. The core moral conflict of the book is that Ellen is an anti-AI fanatic to the point that she considers anyone other than non-uploaded humans to be an existential threat. MacLeod gives the reader both reasons to believe Ellen is right and reasons to believe she's wrong, which maintains an interesting moral tension. One thing that MacLeod is very good at is what Bob Shaw called "wee thinky bits." I think my favorite in this book is the computer technology used by the Cassini Division, who have spent a century in close combat with inimical AI capable of infecting any digital computer system with tailored viruses. As a result, their computers are mechanical non-Von-Neumann machines, but mechanical with all the technology of a highly-advanced 24th century civilization with nanometer-scale manufacturing technology. It's a great mental image and a lot of fun to think about. 
This is the only science fiction novel that I can think of that has a hard-takeoff singularity that nonetheless is successfully resisted and fought to a stand-still by unmodified humanity. Most writers who were interested in the singularity idea treated it as either a near-total transformation leaving only remnants or as something that had to be stopped before it started. MacLeod realizes that there's no reason to believe a post-singularity form of life would be either uniform in intent or free from its own baffling sudden collapses and reversals, which can be exploited by humans. It makes for a much better story. The sociology of this book is difficult to swallow, but the characterization is significantly better than the previous books of the series and the plot is much tighter. I was too annoyed by the political science to fully enjoy it, but that may be partly the fault of my expectations coming in. If you like chewy, idea-filled science fiction with a lot of unexplained world-building that you have to puzzle out as you go, you may enjoy this, although unfortunately I think you need to read at least The Stone Canal first. The ending was a bit unsatisfying, but even that includes some neat science fiction ideas. Followed by The Sky Road, although I understand it is not a straightforward sequel. Rating: 6 out of 10

12 October 2023

Jonathan McDowell: Installing Debian on the BananaPi M2 Zero

My previously mentioned C.H.I.P. repurposing has been partly successful; I've found a use for it (which I still need to write up), but unfortunately it's too useful and the fact it's still a bit flaky has become a problem. I spent a while trying to isolate exactly what the problem is (I'm still seeing occasional hard hangs with no obvious debug output in the logs or on the serial console), then realised I should just buy one of the cheap ARM SBC boards currently available. The C.H.I.P. is based on an Allwinner R8, which is a single ARM v7 core (an A8). So it's fairly low power by today's standards and it seemed pretty much any board would probably do. I considered a Pi 2 Zero, but couldn't be bothered trying to find one in stock at a reasonable price (I've had one on backorder from CPC since May 2022, and yes, I know other places have had them in stock since, but I don't need one enough to chase and I'm now mostly curious about whether it will ever ship). As the title of this post gives away, I settled on a Banana Pi BPI-M2 Zero, which is based on an Allwinner H3. That's a quad-core ARM v7 (an A7), so a bit more oomph than the C.H.I.P. All in all it set me back £25, including a set of heatsinks that form a case around it. I started with the vendor provided Debian SD card image, which is based on Debian 9 (stretch) and so somewhat old. I was able to dist-upgrade my way through buster and bullseye, and end up on bookworm. I then discovered the bookworm 6.1 kernel worked just fine out of the box, and even included a suitable DTB. Which got me thinking about whether I could do a completely fresh Debian install with minimal tweaking. First thing, a boot loader. The Allwinner chips are nice in that they'll boot off SD, so I just needed a suitable u-boot image. Rather than go with the vendor image I had a look at mainline and discovered it had support! So let's build a clean image:
noodles@buildhost:~$ mkdir ~/BPI
noodles@buildhost:~$ cd ~/BPI
noodles@buildhost:~/BPI$ ls
noodles@buildhost:~/BPI$ git clone https://source.denx.de/u-boot/u-boot.git
Cloning into 'u-boot'...
remote: Enumerating objects: 935825, done.
remote: Counting objects: 100% (5777/5777), done.
remote: Compressing objects: 100% (1967/1967), done.
remote: Total 935825 (delta 3799), reused 5716 (delta 3769), pack-reused 930048
Receiving objects: 100% (935825/935825), 186.15 MiB | 2.21 MiB/s, done.
Resolving deltas: 100% (785671/785671), done.
noodles@buildhost:~/BPI$ mkdir u-boot-build
noodles@buildhost:~/BPI$ cd u-boot
noodles@buildhost:~/BPI/u-boot$ git checkout v2023.07.02
...
HEAD is now at 83cdab8b2c Prepare v2023.07.02
noodles@buildhost:~/BPI/u-boot$ make O=../u-boot-build bananapi_m2_zero_defconfig
  HOSTCC  scripts/basic/fixdep
  GEN     Makefile
  HOSTCC  scripts/kconfig/conf.o
  YACC    scripts/kconfig/zconf.tab.c
  LEX     scripts/kconfig/zconf.lex.c
  HOSTCC  scripts/kconfig/zconf.tab.o
  HOSTLD  scripts/kconfig/conf
#
# configuration written to .config
#
make[1]: Leaving directory '/home/noodles/BPI/u-boot-build'
noodles@buildhost:~/BPI/u-boot$ cd ../u-boot-build/
noodles@buildhost:~/BPI/u-boot-build$ make CROSS_COMPILE=arm-linux-gnueabihf-
  GEN     Makefile
scripts/kconfig/conf  --syncconfig Kconfig
...
  LD      spl/u-boot-spl
  OBJCOPY spl/u-boot-spl-nodtb.bin
  COPY    spl/u-boot-spl.bin
  SYM     spl/u-boot-spl.sym
  MKIMAGE spl/sunxi-spl.bin
  MKIMAGE u-boot.img
  COPY    u-boot.dtb
  MKIMAGE u-boot-dtb.img
  BINMAN  .binman_stamp
  OFCHK   .config
noodles@buildhost:~/BPI/u-boot-build$ ls -l u-boot-sunxi-with-spl.bin
-rw-r--r-- 1 noodles noodles 494900 Aug  8 08:06 u-boot-sunxi-with-spl.bin
I had the advantage here of already having a host setup to cross build armhf binaries, but this was all done on a Debian bookworm host with packages from main. I've put my build up here in case it's useful to someone - everything else below can be done on a normal x86_64 host. Next I needed a Debian installer. I went for the netboot variant - although I was writing it to SD rather than TFTP booting, I wanted as much as possible to come over the network.
noodles@buildhost:~/BPI$ wget https://deb.debian.org/debian/dists/bookworm/main/installer-armhf/20230607%2Bdeb12u1/images/netboot/netboot.tar.gz
...
2023-08-08 10:15:03 (34.5 MB/s) - ‘netboot.tar.gz’ saved [37851404/37851404]
noodles@buildhost:~/BPI$ tar -axf netboot.tar.gz
Then I took a suitable microSD card and set it up with a 500M primary VFAT partition, leaving the rest for Linux proper. I could have got away with a smaller VFAT partition but I'd initially thought I might need to put some more installation files on it.
noodles@buildhost:~/BPI$ sudo fdisk /dev/sdb
Welcome to fdisk (util-linux 2.38.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): o
Created a new DOS (MBR) disklabel with disk identifier 0x793729b3.
Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p):
Using default response p.
Partition number (1-4, default 1):
First sector (2048-60440575, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-60440575, default 60440575): +500M
Created a new partition 1 of type 'Linux' and of size 500 MiB.
Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): c
Changed type of partition 'Linux' to 'W95 FAT32 (LBA)'.
Command (m for help): n
Partition type
   p   primary (1 primary, 0 extended, 3 free)
   e   extended (container for logical partitions)
Select (default p):
Using default response p.
Partition number (2-4, default 2):
First sector (1026048-60440575, default 1026048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (534528-60440575, default 60440575):
Created a new partition 2 of type 'Linux' and of size 28.3 GiB.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
$ sudo mkfs -t vfat -n BPI-UBOOT /dev/sdb1
mkfs.fat 4.2 (2021-01-31)
The bootloader image gets written 8k into the SD card (our first partition starts at sector 2048, i.e. 1M into the device, so there's plenty of space here):
noodles@buildhost:~/BPI$ sudo dd if=u-boot-build/u-boot-sunxi-with-spl.bin of=/dev/sdb bs=1024 seek=8
483+1 records in
483+1 records out
494900 bytes (495 kB, 483 KiB) copied, 0.0282234 s, 17.5 MB/s
Copy the Debian installer files onto the VFAT partition:
noodles@buildhost:~/BPI$ cp -r debian-installer/ /media/noodles/BPI-UBOOT/
Unmount the SD from the build host, pop it into the M2 Zero, boot it up while connected to the serial console, hit a key to stop autoboot and tell it to boot the installer:
U-Boot SPL 2023.07.02 (Aug 08 2023 - 09:05:44 +0100)
DRAM: 512 MiB
Trying to boot from MMC1
U-Boot 2023.07.02 (Aug 08 2023 - 09:05:44 +0100) Allwinner Technology
CPU:   Allwinner H3 (SUN8I 1680)
Model: Banana Pi BPI-M2-Zero
DRAM:  512 MiB
Core:  60 devices, 17 uclasses, devicetree: separate
WDT:   Not starting watchdog@1c20ca0
MMC:   mmc@1c0f000: 0, mmc@1c10000: 1
Loading Environment from FAT... Unable to read "uboot.env" from mmc0:1...
In:    serial
Out:   serial
Err:   serial
Net:   No ethernet found.
Hit any key to stop autoboot:  0
=> setenv dibase /debian-installer/armhf
=> fatload mmc 0:1 ${kernel_addr_r} ${dibase}/vmlinuz
5333504 bytes read in 225 ms (22.6 MiB/s)
=> setenv bootargs "console=ttyS0,115200n8"
=> fatload mmc 0:1 ${fdt_addr_r} ${dibase}/dtbs/sun8i-h2-plus-bananapi-m2-zero.dtb
25254 bytes read in 7 ms (3.4 MiB/s)
=> fdt addr ${fdt_addr_r} 0x40000
Working FDT set to 43000000
=> fatload mmc 0:1 ${ramdisk_addr_r} ${dibase}/initrd.gz
31693887 bytes read in 1312 ms (23 MiB/s)
=> bootz ${kernel_addr_r} ${ramdisk_addr_r}:${filesize} ${fdt_addr_r}
Kernel image @ 0x42000000 [ 0x000000 - 0x516200 ]
## Flattened Device Tree blob at 43000000
   Booting using the fdt blob at 0x43000000
Working FDT set to 43000000
   Loading Ramdisk to 481c6000, end 49fffc3f ... OK
   Loading Device Tree to 48183000, end 481c5fff ... OK
Working FDT set to 48183000
Starting kernel ...
At this point the installer runs and you can do a normal install. Well, except the wifi wasn't detected, I think because the netinst images don't include firmware. I spent a bit of time trying to figure out how to include it but ultimately ended up installing over a USB ethernet dongle, which Just Worked and was less faff. Installing firmware-brcm80211 once installation completed allowed the built-in wifi to work fine. After install you need to configure u-boot to boot without intervention. At the u-boot prompt (i.e. after hitting a key to stop autoboot):
=> setenv bootargs "console=ttyS0,115200n8 root=LABEL=BPI-ROOT ro"
=> setenv bootcmd 'ext4load mmc 0:2 ${fdt_addr_r} /boot/sun8i-h2-plus-bananapi-m2-zero.dtb ; fdt addr ${fdt_addr_r} 0x40000 ; ext4load mmc 0:2 ${kernel_addr_r} /boot/vmlinuz ; ext4load mmc 0:2 ${ramdisk_addr_r} /boot/initrd.img ; bootz ${kernel_addr_r} ${ramdisk_addr_r}:${filesize} ${fdt_addr_r}'
=> saveenv
Saving Environment to FAT... OK
=> reset
This is assuming you have /boot on partition 2 on the SD - I left the first partition as VFAT (that's where the u-boot environment will be saved) and just used all of the rest as a single ext4 partition. I did have to do an e2label /dev/sdb2 BPI-ROOT to label / appropriately; otherwise I occasionally saw the SD card appear as mmc1 for Linux (I'm guessing due to asynchronous boot order with the wifi). You should now find the device boots without intervention.

11 October 2023

Russell Coker: The PineTime

I have just got a PineTime smart watch [1] from Pine64. They cost $US27 each, which ended up as $144.63 Australian for three including postage when I ordered on the 16th of September; it's annoying that you can't order more than 3 at a time to reduce postage costs. The Australian online store Kogan has smart watches starting at about $15 [2] with Bluetooth and support for phone notifications, so the $48.21 for a PineTime doesn't compare well on just price and features. The watches Kogan sells start getting into high resolution at around the $25 price and many of them have features like 24*7 heart monitoring that the PineTime lacks (it just measures when you request it). No-one would order a PineTime for being cheap or having lots of features, you order it because you want open hardware that allows you to do things your way. Also the PineTime isn't going to be orphaned, while it's likely that in a few years most of the cheap watches sold by Kogan etc won't support the new phones running the latest version of Android. The screen of the PineTime is 240*240 resolution (about 260dpi) with 64k colors. The screen resolution is lower than some high-end smart watches but higher than most phones and almost all monitors. I doubt that much benefit could be gained from higher resolution. Even on minimum brightness the screen is easy to read on all but the brightest sunny days. The compute capabilities are 4.5MB of flash storage, 64k of RAM, and a 64MHz CPU; this can't run Linux and nothing like it will run Linux for a long time. I've had the PineTime for 6 days now, I charged it once and it's now at 55% battery. It looks like it will last close to 2 weeks on a single charge and it's claimed that a newer firmware will make the battery last longer.

Software

The main Android app for using with the PineTime is GadgetBridge, which I installed from the f-droid repository. It had lots of click-through menus for allowing access to various Android features (contacts, bluetooth, draw over foreground, location, and more) but after that it was easy to set up. It was the first bluetooth device I've used which had a 6 digit PIN for connecting to a phone. Initially I used the PineTime with my Huawei Nova 7i [3]. The aim is to eventually have it run from my PinePhonePro but my test of the PinePhonePro didn't go as well as hoped [4]. Now I'm using it on my Huawei Mate 10 Pro. It comes with InfiniTime [5] installed as the default firmware, mine had 1.11.0 which is a fairly recent version. I will probably upgrade it soon to get the better power optimisation and weather alerts in the watch face. I don't have any plans to use different watch firmware and I don't have any plans to contribute to firmware development - I just can't hack on every FOSS project around; it's better to do big contributions to a small number of projects. For people who don't want the default firmware the Wasp-OS project seems interesting as it's written in Python [6]; I don't like Python but it's very popular. Python is particularly popular in ML development, it will be interesting to see if Wasp-OS becomes a preferred platform for smart watches that talk to GPT servers. Generally the software works well, one annoyance is that when a notification goes away on the phone it remains on the PineTime and has to be manually dismissed. It would be nice if clearing notifications on the phone would clear them on the PineTime too. 
The music control works with RocketPlayer on Android, it displays the track name and has options for pause/play and skipping forward and backward one track. Annoyingly the current firmware doesn't allow configuring the main screens; from the primary screen you swipe down for notifications, right for settings, up for menus, and there's nothing defined for swipe left. I'd like to make swipe left the command to get to music control.

Hardware

It has a detachable band that appears to be within the common range of watch bands. According to the PineTime Wiki page [7] there are a selection of alternate bands that will fit it, but some don't because the band is recessed into the watch. It is IP67 rated which means you can probably wear it while swimming. The charging contacts are exposed on the bottom of the case, which means that any chemicals left by pool water can be cleaned off, and also as they are apparently not expected to be harmed by sweat and skin oil there shouldn't be a problem charging it. I have significant experience using a Samsung Galaxy S5 Mini, which is rated at IP67, in swimming pools. I had two problems with the S5 Mini when getting out of the pool: firstly water in the headphone socket made the phone consider that it was in headphone mode and turn off the speakers, and secondly it took hours to become dry enough to charge and after many swims the charge rate dropped, presumably due to oxide on the contacts. There are reports of success when swimming with a PineTime. Generally it feels well made and appears more solid than the cheapest Kogan devices appear to be.

Conclusion

If I wanted monitoring for medical reasons then I would choose a different smart watch. I've read about people doing things like tracking their body stats 24*7 and trying to discover useful things; the PineTime is not a good option for BioHacking type use. However if I did have a need for such things I'd probably just buy a second smart watch and have one on each wrist. The PineTime generally works well. It's a pity it has fewer hardware features than closed devices that are cheaper. But having a firmware that can be continually improved by the community is good. The continually expanding use of mobile phone technology devices for custom use in corporations (such as a mobile phone in a custom case for scanning prices etc in a supermarket) has some potential for use with this. I can imagine someone adding some custom features to a PineTime for such use. When a supermarket chain has 200,000 employees (as Woolworths in Australia does) then paying for a few months of software development work to make a smart watch do specific things for that company could provide significant value. There are probably some business opportunities for FOSS developers to hack on extra hardware on a PineTime and write software to support it. I recommend that everyone who's into FOSS buy one of these. Preferably make a deal with two friends to get the minimum postage cost.

10 October 2023

Matthias Klumpp: How to indicate device compatibility for your app in MetaInfo data

At the moment I am hard at work putting together the final bits for the AppStream 1.0 release (hopefully to be released this month). The new release comes with many new features, an improved developer API and removal of most deprecated things (so it carefully breaks compatibility with very old data and the previous C API). One of the tasks for the upcoming 1.0 release was #481, asking about a formal way to distinguish Linux phone applications from desktop applications. AppStream infamously does not support any "is-for-phone" label for software components; instead, the decision whether something is compatible with a device is based on the device's capabilities and the component's requirements. This allows truly adaptive applications to describe their requirements correctly, and does not lock us into form factors going into the future, as there are many and the feature range between a phone, a tablet and a tiny laptop is quite fluid. Of course the "match to current device capabilities" check does not work if you are a website ranking phone compatibility. It also does not really work if you are a developer and want to know which devices your component / application will actually be considered compatible with. One goal for AppStream 1.0 is to have its library provide more complete building blocks to software centers. Instead of just a "here's the data, interpret it according to the specification" API, libappstream now interprets the specification for the application and provides API to handle most common operations like checking device compatibility. For developers, AppStream also now implements a few virtual "chassis configurations", to roughly gauge which configurations a component may be compatible with. To test the new code, I ran it against the large Debian and Flatpak repositories to check which applications are considered compatible with what chassis/device type already. The result was fairly disastrous, with many applications not specifying compatibility correctly (many do, but it's by far not the norm!). Which brings me to the actual topic of this blog post: Very few seem to really know how to mark an application compatible with certain screen sizes and inputs! This is most certainly a matter of incomplete guides and good templates, so maybe this post can help with that a bit:

The ultimate cheat-sheet to mark your app chassis-type compatible As a quick reminder, compatibility is indicated using AppStream's relations system: A requires relation indicates that the system will not run at all or will run terribly if the requirement is not met. If the requirement is not met, it should not be installable on a system. A recommends relation means that it would be advantageous to have the recommended items, but it's not essential to run the application (it may run with a degraded experience without the recommended things though). And a supports relation means a given interface/device/control/etc. is supported by this application, but the application may work completely fine without it.

I have a desktop-only application A desktop-only application is characterized by needing a larger screen to fit the application, and requiring a physical keyboard and accurate mouse input. This type is assumed by default if no capabilities are set for an application, but it's better to be explicit. This is the metadata you need:
<component type="desktop-application">
  <id>org.example.desktopapp</id>
  <name>DesktopApp</name>
  [...]
  <requires>
    <display_length>768</display_length>
    <control>keyboard</control>
    <control>pointing</control>
  </requires>
  [...]
</component>
With this requires relation, you require a small-desktop sized screen (at least 768 device-independent pixels (dp) on its smallest edge) and require a keyboard and mouse to be present / connectable. Of course, if your application needs more minimum space, adjust the requirement accordingly. Note that if the requirement is not met, your application may not be offered for installation.
Note: Device-independent / logical pixels One logical pixel (= device-independent pixel) roughly corresponds to the visual angle of one pixel on a device with a pixel density of 96 dpi (for historical X11 reasons) and a distance from the observer of about 52 cm, making the physical pixel about 0.26 mm in size. When using logical pixels as a unit, they might not always map to exact physical lengths, as their exact size is defined by the device providing the display. They do however accurately depict the maximum amount of pixels that can be drawn in the depicted direction on the device's display space. AppStream always uses logical pixels when measuring lengths in pixels.
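As a worked example (assuming a typical modern phone): a device with a 1080×2340 physical panel and a UI scale factor of 3 exposes a logical resolution of 360×780, so its shortest display edge is 360 logical pixels, which is why 360 appears as the minimum display_length in the phone-oriented snippets below.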

I have an application that works on mobile and on desktop / an adaptive app Adaptive applications have fewer hard requirements, but a wide range of support for controls and screen sizes. For example, they support touch input, unlike desktop apps. An example MetaInfo snippet for this kind of app may look like this:
<component type="desktop-application">
  <id>org.example.adaptive_app</id>
  <name>AdaptiveApp</name>
  [...]
  <requires>
    <display_length>360</display_length>
  </requires>
  <supports>
    <control>keyboard</control>
    <control>pointing</control>
    <control>touch</control>
  </supports>
  [...]
</component>
Unlike the pure desktop application, this adaptive application requires a much smaller minimum display edge length, and also supports touch input in addition to keyboard and mouse/touchpad precision input.

I have a pure phone/tablet app Making an application a pure phone application is tricky: We need to mark it as compatible with phones only, while not completely preventing its installation on non-phone devices (even though its UI may be horrible there, you may still want to test the app, and software centers may allow its installation when requested explicitly even if they don't show it by default). This is how to achieve that result:
<component type="desktop-application">
  <id>org.example.phoneapp</id>
  <name>PhoneApp</name>
  [...]
  <requires>
    <display_length>360</display_length>
  </requires>
  <recommends>
    <display_length compare="lt">1280</display_length>
    <control>touch</control>
  </recommends>
  [...]
</component>
We require a phone-sized minimum display edge size (adjust it to a value that fits your app!), but then also recommend that the screen have a smaller edge size than a larger tablet/laptop, while also recommending touch input and not listing any support for keyboard and mouse. Please note that this blog post is of course not a comprehensive guide, so if you want to dive deeper into what you can do with requires/recommends/suggests/supports, you may want to have a look at the relations tags described in the AppStream specification.

Validation It is still easy to make mistakes with the system requirements metadata, which is why AppStream 1.0 will provide more commands to check MetaInfo files for system compatibility. Current pre-1.0 AppStream versions already have an is-satisfied command to check if the application is compatible with the currently running operating system:
:~$ appstreamcli is-satisfied ./org.example.adaptive_app.metainfo.xml
Relation check for: */*/*/org.example.adaptive_app/*
Requirements:
   Unable to check display size: Can not read information without GUI toolkit access.
Recommendations:
   No recommended items are set for this software.
Supported:
   Physical keyboard found.
   Pointing device (e.g. a mouse or touchpad) found.
   This software supports touch input.
In addition to this command, AppStream 1.0 will introduce a new one as well: check-syscompat. This command will check the component against libappstream's mock system configurations that define a "most common" (whatever that is at the time) configuration for a respective chassis type. If you pass the --details flag, you can even get an explanation why the component was considered or not considered for a specific chassis type:
:~$ appstreamcli check-syscompat --details ./org.example.phoneapp.metainfo.xml
Chassis compatibility check for: */*/*/org.example.phoneapp/*
Desktop:
   Incompatible
   recommends: This software recommends a display with its shortest edge
   being << 1280 px in size, but the display of this device has 1280 px.
   recommends: This software recommends a touch input device.
Laptop:
   Incompatible
   recommends: This software recommends a display with its shortest edge 
   being << 1280 px in size, but the display of this device has 1280 px.
   recommends: This software recommends a touch input device.
Server:
   Incompatible
   requires: This software needs a display for graphical content.
   recommends: This software needs a display for graphical content.
   recommends: This software recommends a touch input device.
Tablet:
   Compatible (100%)
Handset:
   Compatible (100%)
I hope this is helpful for people. Happy metadata writing!

6 October 2023

Emanuele Rocca: Custom Debian Installer and Kernel on a USB stick

There are many valid reasons to create a custom Debian Installer image. You may need to pass some special arguments to the kernel, use a different GRUB version, automate the installation by means of preseeding, use a custom kernel, or modify the installer itself.
If you have an EFI system, which is probably the case in 2023, there is no need to learn complex procedures in order to create a custom Debian Installer stick.
The source of many frustrations is that the ISO format for CDs/DVDs is read-only, but you can just create a VFAT filesystem on a USB stick, copy all ISO contents onto the stick itself, and modify things at will.

Create a writable USB stick
First create a FAT32 filesystem on the removable device and mount it. The device is sdX in the example.
$ sudo parted --script /dev/sdX mklabel msdos
$ sudo parted --script /dev/sdX mkpart primary fat32 0% 100%
$ sudo mkfs.vfat /dev/sdX1
$ sudo mount /dev/sdX1 /mnt/data/
Then copy to the USB stick the installer ISO you would like to modify, debian-testing-amd64-netinst.iso here.
$ sudo kpartx -v -a debian-testing-amd64-netinst.iso
# Mount the first partition on the ISO and copy its contents to the stick
$ sudo mount /dev/mapper/loop0p1 /mnt/cdrom/
$ sudo rsync -av /mnt/cdrom/ /mnt/data/
$ sudo umount /mnt/cdrom
# Same story with the second partition on the ISO
$ sudo mount /dev/mapper/loop0p2 /mnt/cdrom/
$ sudo rsync -av /mnt/cdrom/ /mnt/data/
$ sudo umount /mnt/cdrom
$ sudo kpartx -d debian-testing-amd64-netinst.iso
$ sudo umount /mnt/data
Now try booting from the USB stick just to verify that everything went well and we can start customizing the image.

Boot loader, preseeding, installer hacks
The easiest things we can change now are the shim, GRUB, and GRUB's configuration. The USB stick contains the shim under /EFI/boot/bootx64.efi, while GRUB is at /EFI/boot/grubx64.efi. This means that if you want to test a different shim / GRUB version, you just replace the relevant files. That's it. Take for example /usr/lib/grub/x86_64-efi/monolithic/grubx64.efi from the package grub-efi-amd64-bin, or the signed version from grub-efi-amd64-signed, and copy it under /EFI/boot/grubx64.efi. Or perhaps you want to try out systemd-boot? Then take /usr/lib/systemd/boot/efi/systemd-bootx64.efi from the package systemd-boot-efi, copy it to /EFI/boot/bootx64.efi and you're good to go. Figuring out the right systemd-boot configuration needed to start the Installer is left as an exercise.
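For instance, with the stick mounted again at /mnt/data as in the earlier steps, swapping in the unsigned GRUB build could look roughly like this (a sketch; adjust paths to whatever is installed on your build machine):
$ sudo apt install grub-efi-amd64-bin
$ sudo cp /usr/lib/grub/x86_64-efi/monolithic/grubx64.efi /mnt/data/EFI/boot/grubx64.efi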
By editing /boot/grub/grub.cfg you can pass arbitrary arguments to the kernel and the Installer itself. See the official Installation Guide for a comprehensive list of boot parameters.
One very common thing to do is automating the installation using a preseed file. Add the following to the kernel command line: preseed/file=/cdrom/preseed.cfg and create a /preseed.cfg file on the USB stick. As a little example:
d-i time/zone select Europe/Rome
d-i passwd/root-password password this-is-the-root-password
d-i passwd/root-password-again password this-is-the-root-password
d-i passwd/user-fullname string Emanuele Rocca
d-i passwd/username string ema
d-i passwd/user-password password lol-haha-uh
d-i passwd/user-password-again password lol-haha-uh
d-i apt-setup/no_mirror boolean true
d-i popularity-contest/participate boolean true
tasksel tasksel/first multiselect standard
See Steve McIntyre's awesome page with the full list of available settings and their description: https://preseed.einval.com/debian-preseed/.
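For reference, the menu entry in /boot/grub/grub.cfg that boots the preseeded installer might end up looking roughly like this sketch (the /install.amd paths are what the amd64 netinst image uses; adjust them to the layout of the ISO you copied):
menuentry 'Install (preseeded)' {
    linux  /install.amd/vmlinuz preseed/file=/cdrom/preseed.cfg --- quiet
    initrd /install.amd/initrd.gz
}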
Two noteworthy settings are early_command and late_command. They can be used to execute arbitrary commands and thus provide extreme flexibility! You can go as far as replacing parts of the installer with a sed command, or maybe wgetting an entirely different file. This is a fairly easy way to test minor Installer patches. As an example, I once used this to test a patch to grub-installer:
d-i partman/early_command string wget https://people.debian.org/~ema/grub-installer-1035085-1 -O /usr/bin/grub-installer
Finally, the initrd contains all early stages of the installer. It's easy to unpack it, modify whatever component you like, and repack it. Say you want to change a given udev rule:
$ mkdir /tmp/new-initrd
$ cd /tmp/new-initrd
$ zstdcat /mnt/data/install.a64/initrd.gz | sudo cpio -id
$ vi lib/udev/rules.d/60-block.rules
$ find . | cpio -o -H newc | zstd --stdout > /mnt/data/install.a64/initrd.gz

Custom udebs
From a basic architectural standpoint the Debian Installer can be seen as an initrd that loads a series of special Debian packages called udebs. In the previous section we have seen how to (ab)use early_command to replace one of the scripts used by the Installer, namely grub-installer. It turns out that this script is installed by a udeb, so let's do things right and build a new Installer ISO with our custom grub udeb.
Fetch the code for the grub-installer udeb, make your changes and build it with a classic dpkg-buildpackage -rfakeroot.
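A minimal sketch of that step, assuming the udeb's packaging lives under the installer-team namespace on Salsa:
$ git clone https://salsa.debian.org/installer-team/grub-installer/
$ cd grub-installer/
# ... make your changes ...
$ dpkg-buildpackage -rfakeroot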
Then get the Installer code and install all dependencies:
$ git clone https://salsa.debian.org/installer-team/debian-installer/
$ cd debian-installer/
$ sudo apt build-dep .
Now add the grub-installer udeb to the localudebs directory and create a new netboot image:
$ cp /path/to/grub-installer_1.198_arm64.udeb build/localudebs/
$ cd build
$ fakeroot make clean_netboot build_netboot
Give it some time, soon enough you'll have a brand new ISO to test under dest/netboot/mini.iso.

Custom kernel
Perhaps there's a kernel configuration option you need to enable, or maybe you need a more recent kernel version than what is available in sid.
The Debian Linux Kernel Handbook has all the details for how to do things properly, but here s a quick example.
Get the Debian kernel packaging from salsa and generate the upstream tarball:
$ git clone https://salsa.debian.org/kernel-team/linux/
$ ./debian/bin/genorig.py https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
For RC kernels use the repo from Linus instead of linux-stable.
Now do your thing, for instance change a config setting by editing debian/config/amd64/config. Don't worry about where you put it in the file; there's a tool from https://salsa.debian.org/kernel-team/kernel-team to fix that:
$ /path/to/kernel-team/utils/kconfigeditor2/process.py .
Now build your kernel:
$ export MAKEFLAGS=-j$(nproc)
$ export DEB_BUILD_PROFILES='pkg.linux.nokerneldbg pkg.linux.nokerneldbginfo pkg.linux.notools nodoc'
$ debian/rules orig
$ debian/rules debian/control
$ dpkg-buildpackage -b -nc -uc
After some time, if everything went well, you should get a bunch of .deb files as well as a .changes file, linux_6.6~rc3-1~exp1_arm64.changes here. To generate the udebs used by the Installer you need to first get a linux-signed .dsc file, and then build it with sbuild in this example:
$ /path/to/kernel-team/scripts/debian-test-sign linux_6.6~rc3-1~exp1_arm64.changes
$ sbuild --dist=unstable --extra-package=$PWD linux-signed-arm64_6.6~rc3+1~exp1.dsc
Excellent, now you should have a ton of .udebs. To build a custom installer image with this kernel, copy them all under debian-installer/build/localudebs/ and then run fakeroot make clean_netboot build_netboot as described in the previous section. In case you are trying to use a different kernel version from what is currently in sid, you will have to install the linux-image package on the system building the ISO, and change LINUX_KERNEL_ABI in build/config/common. The linux-image dependency in debian/control probably needs to be tweaked as well.
That's it, the new Installer ISO should boot with your custom kernel!
There is going to be another minor obstacle though, as anna will complain that your new kernel cannot be found in the archive. Copy the kernel udebs you have built onto a vfat formatted USB stick, switch to a terminal, and install them all with udpkg:
~ # udpkg -i *.udeb
Now the installation should proceed smoothly.

22 September 2023

Ravi Dwivedi: Debconf23

Official logo of DebConf23

Introduction DebConf23, the 24th annual Debian Conference, was held in India in the city of Kochi, Kerala from the 3rd to the 17th of September, 2023. Ever since I got to know about it (which was more than a year ago), I was excited to attend DebConf in my home country. This was my second DebConf, as I attended one last year in Kosovo. I was very happy that I didn't need to apply for a visa to attend. I got a full bursary to attend the event (thanks a lot to Debian for that!), which is always helpful in covering the expenses, especially if the venue is a five star hotel :) For the conference, I submitted two talks. One was suggested by Sahil on Debian packaging for beginners, while the other was suggested by Praveen, who opined that a talk covering broader topics about freedom in self-hosting services would be better, when I started discussing submitting a talk about the Prav app project. So I submitted one on Debian packaging for beginners and the other on ideas on sustainable solutions for self-hosting. My friend Suresh - who is enthusiastic about Debian and free software - wanted to attend the DebConf as well. When the registration started, I reminded him about applying. We landed in Kochi on the 28th of August 2023 during the festival of Onam. We celebrated Onam in Kochi, had a trip to Wayanad, and returned to Kochi. On the evening of the 3rd of September, we reached the venue - Four Points Hotel by Sheraton, at Infopark Kochi, Ernakulam, Kerala, India.
Suresh and me celebrating Onam in Kochi.

Hotel overview The hotel had 14 floors, and featured a swimming pool and gym (these were included in our package). The hotel gave us elevator access for only our floor, along with public spaces like the reception, gym, swimming pool, and dining areas. The temperature inside the hotel was pretty cold and I had to buy a jacket to survive. Perhaps the hotel was in cahoots with winterwear companies? :)
Four Points Hotel by Sheraton was the venue of DebConf23. Photo credits: Bilal
Photo of the pool. Photo credits: Andreas Tille.
View from the hotel window.

Meals On the first day, Suresh and I had dinner at the eatery on the third floor. At the entrance, a member of the hotel staff asked us how many people we wanted a table for. I told her that it's just the two of us at the moment, but (as we are attending a conference) we might be joined by others. Regardless, they gave us a table for just two. Within a few minutes, we were joined by Alper from Turkey and urbec from Germany. So we shifted to a larger table, but then we were joined by even more people, so we were busy adding more chairs to our table. urbec had already been in Kerala for the past 5-6 days and was, on one hand, very happy already with the quality and taste of bananas in Kerala and, on the other, rather afraid of the spicy food :) Two days later, lunch and dinner were shifted to the All Spice Restaurant on the 14th floor, but breakfast was still served at the eatery. Since the eatery (on the 3rd floor) had a greater variety of food than the other venue, this move made breakfast the best meal for me and many others. Many attendees from outside India were not accustomed to the spicy food. It is difficult for locals to help them, because what we consider mild can be spicy for others. It is not easy to satisfy everyone at the dining table, but I think the organizing team did a very good job in the food department. (That said, it didn't matter for me after a point, and you will know why.) The pappadam were really good, and I liked the rice labelled "Kerala rice". I actually brought that exact rice and pappadam home during my last trip to Kochi and everyone at my home liked it too (thanks to Abhijit PA). I also wished to eat all types of payasams from Kerala and this really happened (thanks to Sruthi who designed the menu). Every meal had a different variety of payasam and it was awesome, although I didn't like some of them, mostly because they were very sweet. Meals were later shifted to the ground floor (taking away the best breakfast option, which was the eatery).
This place served as lunch and dinner place and later as hacklab during debconf. Photo credits: Bilal

The excellent Swag Bag The DebConf registration desk was on the second floor. We were given a very nice swag bag. They were available in multiple colors - grey, green, blue, red - and included an umbrella, a steel mug, a multiboot USB drive by Mostly Harmless, a thermal flask, a mug by Canonical, a paper coaster, and stickers. It rained almost every day in Kochi during our stay, so handing out an umbrella to every attendee was a good idea.
Picture of the awesome swag bag given at DebConf23. Photo credits: Ravi Dwivedi

A gift for Nattie During breakfast one day, Nattie (Belgium) expressed the desire to buy a coffee filter. The next time I went to the market, I bought a coffee filter for her as a gift. She seemed happy with the gift and was flattered to receive a gift from a young man :)

Being a mentor There were many newbies who were eager to learn and contribute to Debian. So, I mentored whoever came to me and was interested in learning. I conducted a packaging workshop in the bootcamp, but could only cover how to set up the Debian Unstable environment, and had to leave out how to package (but I covered that in my talk). Carlos (Brazil) gave a keysigning session in the bootcamp. Praveen was also mentoring in the bootcamp. I helped people understand why we sign GPG keys and how to sign them. I planned to take a workshop on it but cancelled it later.

My talk My Debian packaging talk was on the 10th of September, 2023. I had not prepared slides for my Debian packaging talk in advance - I thought that I could do it during the trip, but I didn't get the time, so I prepared them on the day before the talk. Since it was mostly a tutorial, the slides did not need much preparation. My thanks to Suresh, who helped me with the slides and made it possible to complete them in such a short time frame. My talk was well-received by the audience, going by their comments. I am glad that I could give an interesting presentation.
My presentation photo. Photo credits: Valessio

Visiting a saree shop After my talk, Suresh, Alper, and I went with Anisa and Kristi - who are both from Albania, and have a never-ending fascination for Indian culture :) - to buy them sarees. We took autos to Kakkanad market and found a shop with a great variety of sarees. I was slightly familiar with the area around the hotel, as I had been there for a week. Indian women usually don't try on sarees while buying - they just select the design. But Anisa wanted to put one on and take a few photos as well. The shop staff did not have a trial saree for this purpose, so they took a saree from a mannequin. It took about an hour for the lady at the shop to help Anisa put on that saree, but you could tell that she was in heaven wearing it, and she bought it immediately :) Alper also bought a saree to take back to Turkey for his mother. Suresh and I wanted to buy a kurta which would go well with the mundu we already had, but we could not find anything to our liking.
Selfie with Anisa and Kristi. Photo credits: Anisa.

Cheese and Wine Party On the 11th of September we had the Cheese and Wine Party, a tradition of every DebConf. I brought Kaju Samosa and Nankhatai from home. Many attendees expressed their appreciation for the samosas. During the party, I was with Abhas and had a lot of fun. Abhas brought packets of paan and served them at the Cheese and Wine Party. We discussed interesting things and ate burgers. But due to the restrictive alcohol laws in the state, it was less fun compared to the previous DebConfs - you could only drink alcohol served by the hotel in public places. If you bought your own alcohol, you could only drink in private places (such as in your room, or a friend's room), but not in public places.
Me helping with the Cheese and Wine Party.

Party at my room Last year, Joenio (Brazilian) brought pastis from France, which I liked. He brought the same alcoholic drink this year too. So I invited him to my room after the Cheese and Wine party to have pastis. My idea was to have it with my roommate Suresh and Joenio. But then we permitted Joenio to bring as many people as he wanted, and he ended up bringing some ten people. Suddenly, the room was crowded. I was having a good time at the party, serving them the snacks given to me by Abhas. The news of an alcohol party in my room spread like wildfire. Soon there were so many people that the AC became ineffective and I found myself sweating. I left the room and roamed around in the hotel for some fresh air. I came back after about 1.5 hours - for the most part, I was sitting on the ground floor with TK Saurabh. And then I met Abraham near the gym (which was my last meeting with him). I came back to my room at around 2:30 AM. Nobody seemed to have realized that I was gone. They were thanking me for hosting such a good party. A lot of people left at that point and the remaining people were playing songs and dancing (everyone had been dancing all along!). I had no energy left to dance and join them. They left around 03:00 AM. But I am glad that people enjoyed partying in my room.
This picture was taken when there were few people in my room for the party.

Sadhya Thali On the 12th of September, we had a sadhya thali for lunch. It is a vegetarian thali served on a banana leaf on the eve of Thiruvonam. It wasn't Thiruvonam on this day, but we got a special and filling lunch. The rasam and payasam were especially yummy.
Sadhya Thali: A vegetarian meal served on banana leaf. Payasam and rasam were especially yummy! Photo credits: Ravi Dwivedi.
Sadhya thali being served at debconf23. Photo credits: Bilal

Day trip On the 13th of September, we had a daytrip. I chose the houseboat daytrip in Allepey. Suresh chose the same, and we registered for it as soon as it was open. This was the most sought-after daytrip by the DebConf attendees - around 80 people registered for it. Our bus was set to leave at 9 AM on the 13th of September. Suresh and I woke up at 8:40 and hurried to get to the bus in time. It took two hours to reach the venue where we got the houseboat. The houseboat experience was good. The trip featured some good scenery. I got to experience the renowned Kerala backwaters. We were served food on the boat. We also stopped at a place and had coconut water. By evening, we came back to the place where we had boarded the boat.
Group photo of our daytrip. Photo credits: Radhika Jhalani

A good friend lost When we came back from the daytrip, we received news that Abraham Raji was involved in a fatal accident during a kayaking trip. Abraham Raji was a very good friend of mine. In my Albania-Kosovo-Dubai trip last year, he was my roommate at our Tirana apartment. I roamed around in Dubai with him, and we had many discussions during DebConf22 Kosovo. He was the one who took the photo of me on my homepage. I also met him in MiniDebConf22 Palakkad and MiniDebConf23 Tamil Nadu, and went to his flat in Kochi this year in June. We had many projects in common. He was a Free Software activist and was the designer of the DebConf23 logo, in addition to those for other Debian events in India.
A selfie in memory of Abraham.
We were all fairly shocked by the news. I was devastated. Food lost its taste, and it became difficult to sleep. That night, Anisa and Kristi cheered me up and gave me company. Thanks a lot to them. The next day, Joenio also tried to console me. I thank him for doing a great job. I thank everyone who helped me in coping with the difficult situation. On the next day (the 14th of September), the Debian project leader Jonathan Carter addressed and announced the news officially. The Debian project also mentioned it on their website. Abraham was supposed to give a talk, but following the incident, all talks were cancelled for the day. The conference dinner was also cancelled. As I write, 9 days have passed since his death, but even now I cannot come to terms with it.

Visiting Abraham's house On the 15th of September, the conference ran two buses from the hotel to Abraham's house in Kottayam (a 2 hour ride). I hopped in the first bus and my mood was not very good. Evangelos (Germany) was sitting opposite me, and he began conversing with me. The distraction helped and I was back to normal for a while. Thanks to Evangelos, as he supported me a lot on that trip. He was also very impressed by my use of the StreetComplete app, which I was using to edit OpenStreetMap. In two hours, we reached Abraham's house. I couldn't control myself and burst into tears. I went to see the body. I met his family (mother, father and sister), but I had nothing to say and I felt helpless. Owing to the loss of sleep and appetite over the past few days, I had no energy, and didn't think it was a good idea for me to stay there. I went back by taking the bus after one hour and had lunch at the hotel. I withdrew my talk scheduled for the 16th of September.

A Japanese gift I got a nice Japanese gift from Niibe Yutaka (Japan) - a folder to keep papers, with ancient Japanese manga characters on it. He said he felt guilty because he had swapped his talk with mine, so it got rescheduled from the 12th of September to the 16th of September, which I later withdrew.
Thanks to Niibe Yutaka (the person towards your right hand) from Japan (FSIJ), who gave me a wonderful Japanese gift during debconf23: A folder to keep pages with ancient Japanese manga characters printed on it. I realized I immediately needed that :)
This is the Japanese gift I received.

Group photo On the 16th of September, we had a group photo. I am glad that this year I was more clear in this picture than in DebConf22.
Click to enlarge

Volunteer work and talks attended I attended the training session for the video team and worked as a camera operator. The Bits from the DPL talk was nice. I enjoyed Abhas' presentation on home automation. He basically demonstrated how he liberated Internet-enabled home devices. I also liked Kristi's presentation on ways to engage with the GNOME community.
Bits from the DPL. Photo credits: Bilal
Kristi on GNOME community. Photo credits: Ravi Dwivedi.
Abhas' talk on home automation. Photo credits: Ravi Dwivedi.
I also attended lightning talks on the last day. Badri, Wouter, and I gave a demo on how to register on the Prav app. Prav got a fair share of advertising during the last few days.
I was roaming around with a QR code on my T-shirt for downloading Prav.

The night of the 17th of September Suresh left the hotel and Badri joined me in my room. Thanks to the efforts of Abhijit PA, Kiran, and Ananthu, I wore a mundu.
Me in mundu. Picture credits: Abhijith PA
I then joined Kalyani, Mangesh, Ruchika, Anisa, Ananthu and Kiran. We took pictures and this marked the last night of DebConf23.

Departure day The 18th of September was the day of departure. Badri slept in my room and left early morning (06:30 AM). I dropped him off at the hotel gate. The breakfast was at the eatery (3rd floor) again, and it was good. Sahil, Saswata, Nilesh, and I hung out on the ground floor.
From left: Nilesh, Saswata, me, Sahil. Photo credits: Sahil.
I had an 8 PM flight from Kochi to Delhi, for which I took a cab with Rhonda (Austria), Michael (Nigeria) and Yash (India). We were joined by other DebConf23 attendees at the Kochi airport, where we took another selfie.
Ruchika (taking the selfie) and from left to right: Yash, Joost (Netherlands), me, Rhonda
Joost and I were on the same flight, and we sat next to each other. He then took a connecting flight from Delhi to the Netherlands, while I went with Yash to the New Delhi Railway Station, where we took our respective trains. I reached home on the morning of the 19th of September, 2023.
Joost and me going to Delhi. Photo credits: Ravi.

Big thanks to the organizers DebConf23 was hard to organize - strict alcohol laws, weird hotel rules, the death of a close friend (almost a family member), and a scary notice by the immigration bureau. The people from the team are my close friends and I am proud of them for organizing such a good event. None of this would have been possible without the organizers, who put in more than a year of voluntary effort to produce this. In the meantime, many of them had also organized local events in the time leading up to DebConf. Kudos to them. The organizers also tried their best to get clearance for countries not approved by the ministry. I am also sad that people from China, Kosovo, and Iran could not join. In particular, I feel bad for the people from Kosovo who wanted to attend but could not (as India does not consider their passport to be a valid travel document), considering how we Indians were so well-received in their country last year.

Note about myself I am writing this on the 22nd of September, 2023. It took me three days to put up this post - this was one of the most tragic and difficult posts for me to write. I have literally forced myself to write this. I have still not recovered from the loss of my friend. Thanks a lot to all those who helped me. PS: Credits to contrapunctus for making grammar, phrasing, and capitalization changes.

21 September 2023

Jonathan Carter: DebConf23

I very, very nearly didn't make it to DebConf this year. I had a bad cold/flu for a few days before I left, and after a negative covid-19 test just minutes before my flight, I decided to take the plunge and travel. This is just everything in chronological order, more or less; it's the only way I could write it.

DebCamp I planned to spend DebCamp working on various issues. Very few of them actually got done. I spent the first few days in bed further recovering, took a covid-19 test when I arrived and again after I felt better, and both were negative, so I'm not sure what exactly was wrong with me, but between that and catching up with other Debian duties, I couldn't make any progress on catching up on the packaging work I wanted to do. I'll still post what I intended here, and I'll try to take a few days to focus on these some time next month: Calamares / Debian Live stuff:
  • #980209 installation fails at the install boot loader phase
  • #1021156 calamares-settings-debian: Confusing/generic program names
  • #1037299 Install Debian -> Untrusted application launcher
  • #1037123 Minimal HD space required too small for some live images
  • #971003 Console auto-login doesn't work with sysvinit
At least Calamares has been trixiefied in testing, so there's that! Desktop stuff:
  • #1038660 please set a placeholder theme during development, different from any release
  • #1021816 breeze: Background image not shown any more
  • #956102 desktop-base: unwanted metadata within images
  • #605915 please make it a non-native package
  • #681025 Put old themes in a new package named desktop-base-extra
  • #941642 desktop-base: split theme data files and desktop integrations in separate packages
The Egg theme that I want to develop for testing/unstable is based on Juliette Taka's Homeworld theme that was used for Bullseye. Egg, as in, something that hasn't quite hatched yet. Get it? (for #1038660) Debian Social:
  • Set up Lemmy instance
    • I started setting up a Lemmy instance before DebCamp, and meant to finish it.
  • Migrate PeerTube to new server
    • We got a new physical server for our PeerTube instance; we should have more space for growth and it would help us fix the streaming feature on our platform.
Loopy: I intended to get the loop for DebConf in good shape before I left, so that we could spend some time during DebCamp making some really nice content. Unfortunately this went very tumbly, but at least we ended up with a loopy that kind of worked and wasn't too horrible. There's always another DebConf to try again, right?
So DebCamp as a usual DebCamp was pretty much a wash (fitting with all the rain we had?) for me, at least it gave me enough time to recover a bit for DebConf proper, and I had enough time left to catch up on some critical DPL duties and put together a few slides for the Bits from the DPL talk.

DebConf Bits From the DPL I had very, very little available time to prepare something for Bits from the DPL, but I managed to put some slides together (available on my wiki page). I mostly covered:
  • A very quick introduction of myself (I've done this so many times, it feels redundant giving my history every time), and some introduction on what it is that the DPL does. I declared my intent not to run for DPL again, and the reasoning behind it, and a few bits of information for people who may intend to stand for DPL next year.
  • The sentiment out there for the Debian 12 release (which has been very positive). How we include firmware by default now, and that we're saying goodbye to both the GNU/kFreeBSD and mipsel architectures.
  • Debian Day and the 30th birthday party celebrations from local groups all over the world (and a reminder about the Local Groups BoF later in the week).
  • I looked forward to Debian 13 (trixie!), and how we're gaining riscv64 as a release architecture, as well as loongarch64, and that plans seem to be forming to fix 2k38 in Debian, and hopefully largely by the time the Trixie release comes by.
  • I made some comments about "Enterprise Linux", as people refer to the RHEL eco-system these days, how really bizarre some aspects of it are (like the kernel maintenance), and that some big vendors are choosing to support systems outside of that eco-system now (like cPanel now supporting Ubuntu too). I closed with the quote below from Ian Murdock, and assured the audience that if they want to go out and make money with Debian, they are more than welcome to.
Job Fair I walked through the hallway where the Job Fair was hosted, and enjoyed all the buzz. It's not always easy to get this right, but this year it was very active and energetic, and I hope lots of people made some connections!
Cheese & Wine Due to state laws and alcohol licenses, we couldn't consume alcohol from outside the state of Kerala in the common areas of the hotel (only in private rooms), so this wasn't quite as big or as fun as our usual C&W parties, since we couldn't share as much from our individual countries and cultures, but we always knew that this was going to be the case for this DebConf, and it still ended up being alright.
Day Trip I opted for the forest / waterfalls daytrip. It was really, really long, with lots of time in the bus. I think our trip's organiser underestimated how long it would take between the points on the route (all in all it wasn't that far, but on a bus on a winding mountain road, it takes long). We left at 8:00 and only found our way back to the hotel around 23:30. Even though we arrived tired and hungry, we saw some beautiful scenery and animals, and also met indigenous river people who talked about their struggles against being driven out of their place of living multiple times as the government invests in new developments like dams and hydro power. Photos are available in the DebConf23 public git repository.
Losing a beloved Debian Developer during DebConf To our collective devastation, not everyone made it back from their day trips. Abraham Raji was out on the kayak day trip, and while swimming, got caught by a whirlpool from a drainage system. Even though all of us were properly exhausted and shocked in disbelief at this point, we had to stay up and make some tough decisions. Some initially felt that we had to cancel the rest of DebConf. We also had to figure out how to announce what happened asap, both to the larger project and at DebConf in an official manner, while ensuring that due diligence took place and that the family was informed by the police first before making anything public. We ended up cancelling all the talks for the following day, with an address from the DPL in the morning to explain what had happened. Of all the things I've ever had to do as DPL, this was by far the hardest. The day after that, talks were also cancelled for the morning so that we could attend his funeral. Dozens of DebConf attendees headed out by bus to go pay their final respects, many wearing the t-shirts that Abraham had designed for DebConf. A book of condolences was set up so that everyone who wished to could write a message on how they remembered him. The book will be kept by his family.
Today marks a week since his funeral, and I still feel very raw about it. And even though there was uncertainty whether DebConf should even continue after his death, in hindsight I'm glad that everyone pushed forward. While we were all heartbroken, it was also heartwarming to see people care for each other in all of this. If anything, I think I needed more time at DebConf just to be in that warm aura of emotional support for just a bit longer. There are many people who I wanted to talk to who I barely even had a chance to see. Abraham, or Abru as he was called by some people (which I like, because "bru" in Afrikaans is like "bro" in English, not sure if that's what it implied locally too), enjoyed artistic pursuits, but he was also passionate about knowledge transfer. He ran classes at DebConf both last year and this year (and I think at other local events too) where he taught people packaging via a quick course that he put together. His enthusiasm for Debian was contagious; a few of the people who he was mentoring came up to me and told me that they were going to see it through and become a DD in honor of him. I can't even remember how I reacted to that, my brain was already so worn out, and stitching that together with the tragedy of what happened while at DebConf was just too much for me. I first met him in person last year in Kosovo; I already knew who he was, so I think we interacted during the online events the year before. He was just one of those people who showed so much promise, and I was curious to see what he'd achieve in the future. Unfortunately, he was taken away from us too soon.
Poetry Evening Later in the week we had the poetry evening. This was the first time I had the courage to recite something. I read Ithaka by C.P. Cavafy (translated by Edmund Keeley). The first time I heard about this poem was in an interview with Julian Assange's wife, where she mentioned that he really loves this poem, and it caught my attention because I really like the Weezer song Return to Ithaka and always wondered what it was about, so needless to say, that was another rabbit hole at some point.
Group Photo Our DebConf photographer organised another group photo for this event; links to high-res versions are available on Aigars' website.
BoFs I didn't attend nearly as many talks this DebConf as I would've liked (fortunately I can catch up on video, which should be released soon), but I did make it to a few BoFs. In the Local Groups BoF, representatives from various local teams were present who introduced themselves and explained what they were doing. From memory (sorry if I left someone out), we had people from Belgium, Brazil, Taiwan and South Africa. We talked about the types of events a local group could do (BSPs, Mini DCs, sprints, Debian Day, etc.), how to help local groups get started, booth kits for conferences, and setting up some form of calendar that lists important Debian events in a way that makes it easier for people to plan and co-ordinate. There's a mailing list for co-ordination of local groups, and the irc channel is -localgroups on oftc.
If you got one of these Cheese & Wine bags from DebConf, that's from the South African local group!
In the Debian.net BoF, we discussed the Debian.net hosting service, where Debian pays for VMs hosted for projects by individual DDs on Debian.net. The idea is that we start some form of census that monitors the services: whether they're still in use, whether the system is up to date, whether someone still cares for it, etc. We had some discussion about where the lines of responsibility are drawn, and we can probably make things a little bit more clear in the documentation. We also want to offer more in terms of backups and monitoring (currently DDs do get 500GB from rsync.net that could be used for backups of their services though). The intention is also to deploy some form of configuration management for some essentials across the hosts. We should also look at getting some sponsored hosting for this.
In the Debian Social BoF, we discussed some services that need work / expansion. In particular, Matrix keeps growing at an increased rate as more users use it and more channels are bridged, so it will likely move to its own host with big disks soon. We might replace Pleroma with a fork called Akkoma; this will need some more homework and checking whether it's even feasible. Some services haven't really been used (like Writefreely and Plume), and it might be time to retire them. We might just have to help one or two users migrate some of their posts away if we do retire them. Mjolnir seems to do a fine job at spam blocking; we haven't had any notable incidents yet. WordPress now has improved fediverse support; it's unclear whether it works on a multi-site instance yet, so I'll test it at some point soon and report back. For upcoming services, we are implementing Lemmy and probably also Mobilizon. A request was made that we also look into Loomio.
More Information Overload There's so much that happens at DebConf, it's tough to take it all in, and also to find time to write about all of it, but I'll mention a few more things that are certainly worthy of note. During DebConf, we had some people from the KITE Linux team over. KITE supplies the ICT needs for the primary and secondary schools in the state of Kerala, where they all use Linux. They decided to switch all of these to Debian. There was an ad-hoc BoF where locals were listening and fielding questions that the KITE Linux team had. It was great seeing all the energy and enthusiasm behind this effort; I hope someone will properly blog about this! I learned about the VGLUG Foundation, who are doing a tremendous job at promoting GNU/Linux in the country. They are also training up 50 people a year to be able to provide tech support for Debian. I came across the booth for Mostly Harmless, who liberate old hardware by installing free firmware on it. It was nice seeing all the devices out there that could be liberated, and how it can breathe new life into old hardware.
Some hopefully harmless soldering.
Overall, the community and their activities in India are very impressive, and I wish I had more time to get to know everyone better.
Food Oh yes, one more thing. The food was great. I tasted more different kinds of curry than I ever did in my whole life up to this point. The lunch on banana leaves was interesting, and so was learning how to eat this food properly by hand (thanks to the locals who insisted on teaching me!); it was a fruitful experience? This might catch on at home too - fewer dishes to take care of!
Special thanks to the DebConf23 Team I think this may have been one of the toughest DebConfs to organise yet, and I don't think many people outside of the DebConf team know about all the challenges and adversity this team has faced in organising it. Even just getting to the previous DebConf in Kosovo was a long and tedious and somewhat risky process. Through it all, they were absolute pros. Not once did I see them get angry or yell at each other; whenever a problem came up, they just dealt with it. They did a really stellar job, and I did make a point of telling them on the last day that everyone appreciated all the work that they did.
Back to my nest I bought Dax a ball back from India; he seems to have forgiven me for not taking him along.
I'll probably take a few days soon to focus a bit on my bugs and catch up on my original DebCamp goals. If you made it this far, thanks for reading! And thanks to everyone for being such fantastic people.

Jonathan McDowell: DebConf23 Writeup

DebConf2023 Logo (I wrote this up for an internal work post, but I figure it's worth sharing more publicly too.) I spent last week at DebConf23, this year's instance of the annual Debian conference, which was held in Kochi, India. As usual, DebConf provides a good reason to see a new part of the world; I've been going since 2004 (Porto Alegre, Brazil), and while I've missed a few (Mexico, Bosnia, and Switzerland) I've still managed to make it to instances on 5 continents. This has absolutely nothing to do with work, so I went on my own time + dime, but I figured a brief write-up might prove of interest. I first installed Debian back in 1999 as a machine that was being co-located to operate as a web server / email host. I was attracted by the promise of easy online upgrades (or, at least, upgrades that could be performed without the need to be physically present at the machine, even if they naturally required a reboot at some point). It has mostly delivered on this over the years, and I've never found a compelling reason to move away. I became a Debian Developer in 2000. As a massively distributed volunteer project DebConf provides an opportunity to find out what's happening in other areas of the project, catch up with team mates, and generally feel more involved and energised to work on Debian stuff. Also, by this point in time, a lot of Debian folk are good friends and it's always nice to catch up with them. On that point, I felt that this year the hallway track was not quite the same as usual. For a number of reasons (COVID, climate change, travel time, we're all getting older) I think fewer core teams are achieving critical mass at DebConf - I was the only member physically present from 2 teams I'm involved in, and I'd have appreciated the opportunity to sit down with both of them for some in-person discussions. It also means it's harder to use DebConf as a venue for advancing major changes; previously having all the decision makers in the same space for a week has meant it's possible to iron out the major discussion points, smoothing remote implementation after the conference. I'm told the mini DebConfs are where it's at for these sorts of meetings now, so perhaps I'll try to attend at least one of those next year. Of course, I also went to a bunch of talks. I have differing levels of comment about each of them, but I've written up some brief notes below about the ones I remember something about. The comment was made that we perhaps had a lower level of deep technical talks, which is perhaps true but I still think there were a number of high level technical talks that served to pique one's interest about the topic. Finally, this DebConf was the first I'm aware of that was accompanied by tragedy; as part of the day trip Abraham Raji, a project member and member of the local team, was involved in a fatal accident.

Talks (videos not yet up for all, but should appear for most)
  • Opening Ceremony
    Not much to say here; welcome to DebConf!
  • Continuous Key-Signing Party introduction
    I ended up running this, as Gunnar couldn't make it. Debian makes heavy use of the OpenPGP web of trust (no mass ability to send out Yubikeys + perform appropriate levels of identity verification), so making sure we're appropriately cross-signed, and linked to local conference organisers, is a dull but important part of the conference. We use a modified keysigning approach where identity verification + fingerprint confirmation happens over the course of the conference, so this session was just to explain how that works and confirm we were all working from the same fingerprint list.
  • State of Stateless - A Talk about Immutability and Reproducibility in Debian
    Stateless OSes seem to be gaining popularity, so I went along to this to see if there was anything of note. It was interesting, but nothing earth shattering - very high level.
  • What's missing so that Debian is finally reproducible?
    Reproducible builds are something I've been keeping an eye on for a long time, and I continue to be impressed by the work folks are putting into this - both for Debian, and other projects. From a security standpoint reproducible builds provide confidence against trojaned builds, and from a developer standpoint knowing you can build reproducibly helps with not having to keep a whole bunch of binary artefacts around.
  • Hello from keyring-maint
    In the distant past the process of getting your OpenPGP key into the Debian keyring (which is used to authenticate uploads + votes, amongst other things) was a clunky process that was often stalled. This hasn't been the case for at least the past 10 years, but there's still a residual piece of project memory that thinks keyring is a blocker. So as a team we say hi and talk about the fact we do monthly updates and generally are fairly responsive these days.
  • A declarative approach to Linux networking with Netplan
    Debian's /etc/network/interfaces is a fairly basic (if powerful) mechanism for configuring network interfaces. NetworkManager is a better bet for dynamic hosts (i.e. clients), and systemd-network seems to be a good choice for servers (I'm gradually moving machines over to it). Netplan tries to provide a unified mechanism for configuring both with a single configuration language. A noble aim, but I don't see a lot of benefit for anything I use - my NetworkManager hosts are highly dynamic (so no need to push shared config) and systemd-network (or /etc/network/interfaces) works just fine on the other hosts. I'm told Netplan has more use with more complicated setups, e.g. when OpenVSwitch is involved.
  • Quick peek at ZFS, A too good to be true file system and volume manager.
    People who use ZFS rave about it. I'm naturally suspicious of any file system that doesn't come as part of my mainline kernel. But, as a longtime cautious mdraid+lvm+ext4 user, I appreciate that there have been advances in the file system space that maybe I should look at, and I've been trying out btrfs on more machines over the past couple of years. I can't deny ZFS has a bunch of interesting features, but nothing I need/want that I can't get from an mdraid+lvm+btrfs stack (in particular data checksumming + reflinks for dedupe were strong reasons to move to btrfs over ext4).
  • Bits from the DPL
    Exactly what it says on the tin; some bits from the DPL.
  • Adulting
    Enrico is always worth hearing talk; Adulting was no exception. The main takeaway is that we need to avoid trying to run the project on martyrs and instead make sure we build a sustainable project. I've been trying really hard to accept I just don't have time to take on additional responsibilities, no matter how interesting or relevant they might seem, so this resonated.
  • My life in git, after subversion, after CVS.
    Putting all of your home directory in revision control. I've never made this leap; I've got some Ansible playbooks that push out my core pieces of configuration, which is held in git, but I don't actually check this out directly on hosts I have accounts on. Interesting, but not for me.
  • EU Legislation BoF - Cyber Resilience Act, Product Liability Directive and CSAM Regulation
    The CRA seems to be a piece of ill-informed legislation that I'm going to have to find time to read properly. Discussion was a bit more alarmist than I personally feel is warranted, but it was a short session, had a bunch of folk in it, and even when I removed my mask it was hard to make myself understood.
  • What's new in the Linux kernel (and what's missing in Debian)
    An update from Ben about new kernel features. I'm paying less attention to such things these days, so nice to get a quick overview of it all.
  • Intro to SecureDrop, a sort-of Linux distro
    Actually based on Ubuntu, but lots of overlap with Debian as a result, and highly customised anyway. Notable, to me, for using OpenPGP as some of the backend crypto support. I managed to talk to Kunal separately about some of the pain points around that, which was an interesting discussion - they're trying to move from GnuPG to Sequoia, primarily because of the much easier integration and lack of requirement for the more complicated GnuPG features that sometimes get in the way.
  • The Docker(.io) ecosystem in Debian
    I hate Docker. I'm sure it's fine if you accept that it wants to take over the host machine entirely, but when I've played around with it that's not been the case. This talk was more about the difficulty of trying to keep a fast moving upstream with lots of external dependencies properly up to date in a stable release. Vendoring the deps and trying to get a stable release exception seems like the least bad solution, but it's a problem that affects a growing number of projects.
  • Chiselled containers
    This was kind of interesting, but I think I missed the piece about why more granular packaging wasn't an option. The premise is you can take an existing .deb and chisel it into smaller components, which then helps separate out dependencies rather than pulling in as much as the original .deb would. This was touted as being useful, in particular, for building targeted containers. Definitely appealing over custom built userspaces for containers, but in an ideal world I think we'd want the information in the main packaging, and it becomes a lot of work.
  • Debian Contributors shake-up
    Debian Contributors is a great site for massaging your ego around contributions to Debian; it's also a useful point of reference from a data protection viewpoint in terms of information the project holds about contributors - everything is already public, but the Contributors website provides folk with an easy way to find their own information (with various configurable options about whether that's made public or not). Tássia is working on improving the various data feeds into the site, but realistically this is the responsibility of every Debian service owner.
  • New Member BOF
    I'm part of the teams that help get new folk into Debian - primarily as a member of the New Member Front Desk, but also as a mostly inactive Application Manager. It's been a while since we did one of these sessions so the Front Desk/Debian Account Managers that were present did a panel session. Nothing earth shattering came out of it; like keyring-maint this is a team that has historically had problems, but is currently running smoothly.

20 September 2023

Joey Hess: Haskell webassembly in the browser


live demo As far as I know this is the first Haskell program compiled to Webassembly (WASM) with mainline ghc and using the browser DOM. ghc's WASM backend is solid, but it only provides very low-level FFI bindings when used in the browser. Ints and pointers to WASM memory. (See here for details and for instructions on getting the ghc WASM toolchain I used.) I imagine that in the future, WASM code will interface with the DOM by using a WASI "world" that defines a complete API (and browsers won't include Javascript engines anymore). But currently, WASM can't do anything in a browser without calling back to Javascript. For this project, I needed 63 lines of (reusable) javascript (here). Plus another 18 to bootstrap running the WASM program (here). (Also browser_wasi_shim) But let's start with the Haskell code. A simple program to pop up an alert in the browser looks like this:
{-# LANGUAGE OverloadedStrings #-}
import Wasmjsbridge
foreign export ccall hello :: IO ()
hello :: IO ()
hello = do
    alert <- get_js_object_method "window" "alert"
    call_js_function_ByteString_Void alert "hello, world!"
A larger program that draws on the canvas and generated the image above is here. The Haskell side of the FFI interface is a bunch of fairly mechanical functions like this:
foreign import ccall unsafe "call_js_function_string_void"
    _call_js_function_string_void :: Int -> CString -> Int -> IO ()
call_js_function_ByteString_Void :: JSFunction -> B.ByteString -> IO ()
call_js_function_ByteString_Void (JSFunction n) b =
      BU.unsafeUseAsCStringLen b $ \(buf, len) ->
                _call_js_function_string_void n buf len
Many more would need to be added, or generated, to continue down this path to complete coverage of all data types. All in all it's 64 lines of code so far (here). Also a C shim is needed, that imports from WASI modules and provides C functions that are used by the Haskell FFI. It looks like this:
void _call_js_function_string_void(uint32_t fn, uint8_t *buf, uint32_t len) __attribute__((
        __import_module__("wasmjsbridge"),
        __import_name__("call_js_function_string_void")
));
void call_js_function_string_void(uint32_t fn, uint8_t *buf, uint32_t len) {
        _call_js_function_string_void(fn, buf, len);
}
Another 64 lines of code for that (here). I found this pattern in Joachim Breitner's haskell-on-fastly and copied it rather blindly. Finally, the Javascript that gets run for that is:
call_js_function_string_void(n, b, sz) {
    const fn = globalThis.wasmjsbridge_functionmap.get(n);
    const buffer = globalThis.wasmjsbridge_exports.memory.buffer;
    fn(decoder.decode(new Uint8Array(buffer, b, sz)));
},
Notice that this gets an identifier representing the javascript function to run, which might be any method of any object. It looks it up in a map and runs it. And the ByteString that got passed from Haskell has to be decoded to a javascript string. In the Haskell program above, the function is window.alert. Why not pass a ByteString with that through the FFI? Well, you could. But then it would have to eval it. That would make running WASM in the browser be evaling Javascript every time it calls a function. That does not seem like a good idea if the goal is speed. GHC's javascript backend does use Javascript FFI snippets like that, but there they get pasted into the generated Javascript hairball, so no eval is needed. So my code has things like get_js_object_method that look up things like Javascript functions and generate identifiers. It also has this:
call_js_function_ByteString_Object :: JSFunction -> B.ByteString -> IO JSObject
Which can be used to call things like document.getElementById that return a javascript object:
getElementById <- get_js_object_method (JSObjectName "document") "getElementById"
canvas <- call_js_function_ByteString_Object getElementById "myCanvas"
Here's the Javascript called by get_js_object_method. It generates a Javascript function that will be used to call the desired method of the object, and allocates an identifier for it, and returns that to the caller.
get_js_objectname_method(ob, osz, nb, nsz) {
    const buffer = globalThis.wasmjsbridge_exports.memory.buffer;
    const objname = decoder.decode(new Uint8Array(buffer, ob, osz));
    const funcname = decoder.decode(new Uint8Array(buffer, nb, nsz));
    const func = function (...args) { return globalThis[objname][funcname](...args) };
    const n = globalThis.wasmjsbridge_counter + 1;
    globalThis.wasmjsbridge_counter = n;
    globalThis.wasmjsbridge_functionmap.set(n, func);
    return n;
},
This does mean that every time a Javascript function id is looked up, some more memory is used on the Javascript side. For more serious uses of this, something would need to be done about that. Lots of other stuff like object value getting and setting is also not implemented, there's no support yet for callbacks, and so on. Still, I'm happy where this has gotten to after 12 hours of work on it. I might release the reusable parts of this as a Haskell library, although it seems likely that ongoing development of ghc will make it obsolete. In the meantime, clone the git repo to have a play with it.
This blog post was sponsored by unqueued on Patreon.

15 September 2023

John Goerzen: How Gapped is Your Air?

Sometimes we want better-than-firewall security for things. For instance:
  1. An industrial control system for a municipal water-treatment plant should never have data come in or out
  2. Or, a variant of the industrial control system: it should only permit telemetry and monitoring data out, and nothing else in or out
  3. A system dedicated to keeping your GPG private keys secure should only have material to sign (or decrypt) come in, and signatures (or decrypted data) go out
  4. A system keeping your tax records should normally only have new records go in, but may on occasion have data go out (eg, to print a copy of an old record)
In this article, I'll talk about the high side (the high-security or high-sensitivity systems) and the low side (the lower-sensitivity or general-purpose systems). For the sake of simplicity, I'll assume the high side is a single machine, but it could as well be a whole network. Let's focus on examples 3 and 4 to make things simpler. Let's consider the primary concern to be data exfiltration (someone stealing your data), with a secondary concern of data integrity (somebody modifying or destroying your data). You might think the safest possible approach is "airgapped" - that is, there is literally no physical network connection to the machine at all. This helps! But then, the problem becomes: how do we deal with the inevitable need to legitimately get things on or off of the system? As I wrote in Dead USB Drives Are Fine: Building a Reliable Sneakernet, by using tools such as NNCP, you can certainly create a "sneakernet": using USB drives as transport. While this is a very secure setup, as with most things in security, it's less than perfect. The Wikipedia airgap article discusses some ways airgapped machines can still be exploited. It mentions that security holes relating to removable media have been exploited in the past. There are also other ways to get data out; for instance, Debian ships with gensio and minimodem, both of which can transfer data acoustically. But let's back up and think about why we think of airgapped machines as so much more secure, and what the failure modes of other approaches might be.

What about firewalls? You could very easily set up a high-side machine that is on a network, but is restricted to only one outbound TCP port. There could be a local firewall, and perhaps also a special port on an external firewall that implements the same restrictions. A variant on this approach would be two computers connected directly by a crossover cable, though this doesn't necessarily imply being more secure. Of course, the concern about a local firewall is that it could potentially be compromised. An external firewall might too; for instance, if your credentials to it were on a machine that got compromised. This kind of dual compromise may be unlikely, but it is possible. We can also think about the complexity in a network stack and firewall configuration, and think that there may be various opportunities to have things misconfigured or buggy in a system of that complexity. Another consideration is that data could be sent at any time, potentially making it harder to detect. On the other hand, network monitoring tools are commonplace. On the other hand, it is convenient and cheap. I use a system along those lines to do my backups. Data is sent, gpg-encrypted and then encrypted again at the NNCP layer, to the backup server. The NNCP process on the backup server runs as an untrusted user, and dumps the gpg-encrypted files to a secure location that is then processed by a cron job using Filespooler. The backup server is on a dedicated firewall port, with a dedicated subnet. The only ports allowed out are for NNCP and NTP, and offsite backups. There is no default gateway. Not even DNS is permitted out (the firewall does the appropriate redirection). There is one pinhole allowed out, where a subset of the backup data is sent offsite. I initially used USB drives as transport, and it had no network connection at all. But there were disadvantages to doing this for backups - particularly that I'd have no backups for as long as I forgot to move the drives. The backup system also would have clock drift, and the offsite backup picture was more challenging. (The clock drift was a problem because I use 2FA on the system; a password, plus a TOTP generated by a Yubikey.) This is pretty good security, I'd think. What are the weak spots? Well, if there were somehow a bug in the NNCP client, and the remote NNCP were compromised, that could lead to a compromise of the NNCP account. But this itself would accomplish little; some other vulnerability would have to be exploited on the backup server, because the NNCP account can't see plaintext data at all. I use borgbackup to send a subset of backup data offsite over ssh. borgbackup has to run as root to be able to access all the files, but the ssh it calls runs as a separate user. An ssh vulnerability is therefore unlikely to cause much damage. If, somehow, the remote offsite system were compromised and it was able to exploit a security issue in the local borgbackup, that would be a problem. But that sounds like a remote possibility. borgbackup itself can't even be used over a sneakernet since it is not asynchronous. A more secure solution would probably be using something like dar over NNCP. This would eliminate the ssh installation entirely, and allow a complete isolation between the data-access and the communication stacks, and notably not require bidirectional communication. Logic separation matters too. My Roundup of Data Backup and Archiving Tools may be helpful here.
Other attack vectors could be a vulnerability in the kernel's networking stack, local root exploits that could be combined with exploiting NNCP or borgbackup to gain root, or local misconfiguration that makes the sandboxes around NNCP and borgbackup less secure. Because this system is in my basement in a utility closet with no chairs and no good place for a console, I normally manage it via a serial console. While it's a dedicated line between the system and another machine, if the other machine is compromised or an adversary gets access to the physical line, credentials (and perhaps even data) could leak, albeit slowly. But we can do much better with serial lines. Let's take a look.

Serial lines Some of us remember RS-232 serial lines and their once-ubiquitous DB-9 connectors. Traditionally, their speed maxed out at 115.2Kbps. Serial lines have the benefit that they can be a direct application-to-application link. In my backup example above, a serial line could directly link the NNCP daemon on one system with the NNCP caller on another, with no firewall or anything else necessary. It is simply up to those programs to open the serial device appropriately. This isn't perfect, however. Unlike TCP over Ethernet, a serial line has no inherent error checking. Modern programs such as NNCP and ssh assume that a lower layer is making the link completely clean and error-free for them, and will interpret any corruption as an attempt to tamper and sever the connection. However, there is a solution to that: gensio. In my page Using gensio and ser2net, I discuss how to run NNCP and ssh over gensio. gensio is a generic framework that can add framing, error checking, and retransmit to an unreliable link such as a serial port. It can also add encryption and authentication using TLS, which could be particularly useful for applications that aren't already doing that themselves. More traditional solutions for serial communications have their own built-in error correction. For instance, UUCP and Kermit both were designed in an era of noisy serial lines and might be an excellent fit for some use cases. The ZModem protocol also might be, though it offers somewhat less flexibility and automation than Kermit. I have found that certain USB-to-serial adapters by Gearmo will actually run at up to 2Mbps on a serial line! Look for the ones on their spec pages with a FTDI chipset rated at 920Kbps. It turns out they can successfully be driven faster, especially if gensio's relpkt is used. I've personally verified 2Mbps operation (Linux port speed 2000000) on Gearmo's USA-FTDI2X and the USA-FTDI4X. (I haven't seen any single-port options from Gearmo with the 920Kbps chipset, but they may exist). Still, even at 2Mbps, speed may well be a limiting factor with some applications. If what you need is a console and some textual or batch data, it's probably fine. If you are sending 500GB backup files, you might look for something else. In theory, this USB to RS-422 adapter should work at 10Mbps, but I haven't tried it. But if the speed works, running a dedicated application over a serial link could be a nice and fairly secure option. One of the benefits of the airgapped approach is that data never leaves unless you are physically aware of transporting a USB stick. Of course, you may not be physically aware of what is ON that stick in the event of a compromise. This could easily be solved with a serial approach by, say, only plugging in the cable when you have data to transfer.
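To make the framing point concrete, here is a small, hedged illustration in Python - not gensio itself, just the kind of framing plus error check that a layer like gensio (or Kermit, or ZModem) provides over a raw serial link. It assumes pyserial and a device at /dev/ttyUSB0, both of which are my assumptions rather than part of the setup described above.
import struct, zlib
import serial  # pyserial; an assumption, installed separately

MAGIC = b"\x7e\x7e"

def frame(payload: bytes) -> bytes:
    # magic | 2-byte length | payload | CRC32 over length+payload
    body = struct.pack(">H", len(payload)) + payload
    return MAGIC + body + struct.pack(">I", zlib.crc32(body))

def send(port: serial.Serial, payload: bytes) -> None:
    # A receiver would resynchronise on MAGIC, check the CRC, and request a
    # resend on mismatch - the part gensio's relpkt handles properly.
    port.write(frame(payload))

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
    send(port, b"hello over a noisy line")
In practice you would use gensio (or UUCP/Kermit) rather than rolling your own framing; the sketch just shows why a bare serial byte stream is not enough for tools like NNCP and ssh.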

Data diodes A traditional diode lets electrical current flow in only one direction. A data diode is the same concept, but for data: a hardware device that allows data to flow in only one direction. This could be useful, for instance, in the tax records system that should only receive data, or the industrial system that should only send it. Wikipedia claims that the simplest kind of data diode is a fiber link with transceivers connected in only one direction. I think you could go one simpler: a serial cable with only ground and TX connected at one end, wired to ground and RX at the other. (I haven't tried this.) This approach does have some challenges:
  • Many existing protocols assume a bidirectional link and won t be usable
  • There is a challenge of confirming data was successfully received. For a situation like telemetry, maybe it doesn't matter; another observation will come along in a minute. But for sending important documents, one wants to make sure they were properly received.
In some cases, the solution might be simple. For instance, with telemetry, just writing out data down the serial port in a simple format may be enough. For sending files, various mitigations, such as sending them multiple times, etc., might help. You might also look into FEC-supporting infrastructure such as blkar and flute, but these don't provide an absolute guarantee. There is no perfect solution to knowing when a file has been successfully received if the data communication is entirely one-way.
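As a hedged sketch of the telemetry case - my own illustration, not something from the article - here is what a one-way sender over a TX-only serial "diode" might look like, again assuming pyserial and /dev/ttyUSB0, with each record repeated a few times because there is no acknowledgement path.
import json, time
import serial  # pyserial

ser = serial.Serial("/dev/ttyUSB0", 115200)

def send_reading(reading, repeats=3):
    # No return path means no ACK: send each record several times and let
    # the receiver de-duplicate on the sequence number.
    line = (json.dumps(reading) + "\n").encode()
    for _ in range(repeats):
        ser.write(line)
        ser.flush()

seq = 0
while True:
    send_reading({"seq": seq, "temp_c": 21.5})  # hypothetical sensor value
    seq += 1
    time.sleep(60)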

Audio transport I hinted above that minimodem and gensio both are software audio modems. That is, you could literally use speakers and microphones, or alternatively audio cables, as a means of getting data into or out of these systems. This is pretty limited; it is 1200bps, and often half-duplex, and could literally be disrupted by barking dogs in some setups. But hey, it's an option.

Airgapped with USB transport This is the scenario I began with, and named some of the possible pitfalls above as well. In addition to those, note also that USB drives aren't necessarily known for their error-free longevity. Be prepared for failure.

Concluding thoughts I wanted to lay out a few things in this post. First, that simply being airgapped is generally a step forward in security, but is not perfect. Secondly, that both physical and logical separation matter. And finally, that while tools like NNCP can make airgapped-with-USB-drive-transport a doable reality, there are also alternatives worth considering - especially serial ports, firewalled hard-wired Ethernet, data diodes, and so forth. I think serial links, in particular, have been largely forgotten these days. Note: This article also appears on my website, where it may be periodically updated.

13 September 2023

Matthew Garrett: Reconstructing an invalid TPM event log

TPMs contain a set of registers ("Platform Configuration Registers", or PCRs) that are used to track what a system boots. Each time a new event is measured, a cryptographic hash representing that event is passed to the TPM. The TPM appends that hash to the existing value in the PCR, hashes that, and stores the final result in the PCR. This means that while the PCR's value depends on the precise sequence and value of the hashes presented to it, the PCR value alone doesn't tell you what those individual events were. Different PCRs are used to store different event types, but there are still more events than there are PCRs so we can't avoid this problem by simply storing each event separately.

This is solved using the event log. The event log is simply a record of each event, stored in RAM. The algorithm the TPM uses to calculate the PCR values is known, so we can reproduce that by simply taking the events from the event log and replaying the series of events that were passed to the TPM. If the final calculated value is the same as the value in the PCR, we know that the event log is accurate, which means we now know the value of each individual event and can make an appropriate judgement regarding its security.
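As a rough illustration of that replay step - a minimal sketch of my own, not the tooling used here - the extend-and-compare calculation looks something like this, assuming a SHA-256 PCR bank and an all-zero starting value (the footnote later explains how locality 3 changes that starting value):
import hashlib

def replay_pcr(event_hashes, initial=bytes(32)):
    # TPM extend: new_pcr = SHA-256(old_pcr || event_hash), folded over the log.
    pcr = initial
    for event in event_hashes:
        pcr = hashlib.sha256(pcr + event).digest()
    return pcr

# Hypothetical event log: hash each recorded event, replay, then compare the
# result with the value actually read back from the TPM.
log_events = [b"secure boot enabled", b"hypothetical db key"]
calculated = replay_pcr([hashlib.sha256(e).digest() for e in log_events])
# if calculated != pcr_value_read_from_tpm: the log cannot be trusted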

If any value in the event log is invalid, we'll calculate a different PCR value and it won't match. This isn't terribly helpful - we know that at least one entry in the event log doesn't match what was passed to the TPM, but we don't know which entry. That means we can't trust any of the events associated with that PCR. If you're trying to make a security determination based on this, that's going to be a problem.

PCR 7 is used to track information about the secure boot policy on the system. It contains measurements of whether or not secure boot is enabled, and which keys are trusted and untrusted on the system in question. This is extremely helpful if you want to verify that a system booted with secure boot enabled before allowing it to do something security or safety critical. Unfortunately, if the device gives you an event log that doesn't replay correctly for PCR 7, you now have no idea what the security state of the system is.

We ran into that this week. Examination of the event log revealed an additional event other than the expected ones - a measurement accompanied by the string "Boot Guard Measured S-CRTM". Boot Guard is an Intel feature where the CPU verifies the firmware is signed with a trusted key before executing it, and measures information about the firmware in the process. Previously I'd only encountered this as a measurement into PCR 0, which is the PCR used to track information about the firmware itself. But it turns out that at least some versions of Boot Guard also measure information about the Boot Guard policy into PCR 7. The argument for this is that this is effectively part of the secure boot policy - having a measurement of the Boot Guard state tells you whether Boot Guard was enabled, which tells you whether or not the CPU verified a signature on your firmware before running it (as I wrote before, I think Boot Guard has user-hostile default behaviour, and that enforcing this on consumer devices is a bad idea).

But there's a problem here. The event log is created by the firmware, and the Boot Guard measurements occur before the firmware is executed. So how do we get a log that represents them? That one's fairly simple - the firmware simply re-calculates the same measurements that Boot Guard did and creates a log entry after the fact[1]. All good.

Except. What if the firmware screws up the calculation and comes up with a different answer? The entry in the event log will now not match what was sent to the TPM, and replaying will fail. And without knowing what the actual value should be, there's no way to fix this, which means there's no way to verify the contents of PCR 7 and determine whether or not secure boot was enabled.

But there's still a fundamental source of truth - the measurement that was sent to the TPM in the first place. Inspired by Henri Nurmi's work on sniffing Bitlocker encryption keys, I asked a coworker if we could sniff the TPM traffic during boot. The TPM on the board in question uses SPI, a simple bus that can have multiple devices connected to it. In this case the system flash and the TPM are on the same SPI bus, which made things easier. The board had a flash header for external reprogramming of the firmware in the event of failure, and all SPI traffic was visible through that header. Attaching a logic analyser to this header made it simple to generate a record of that. The only problem was that the chip select line on the header was attached to the firmware flash chip, not the TPM. This was worked around by simply telling the analysis software that it should invert the sense of the chip select line, ignoring all traffic that was bound for the flash and paying attention to all other traffic. This worked in this case since the only other device on the bus was the TPM, but would cause problems in the event of multiple devices on the bus all communicating.

With the aid of this analyser plugin, I was able to dump all the TPM traffic and could then search for writes that included the "0182" sequence that corresponds to the command code for a measurement event. This gave me a couple of accesses to the locality 3 registers, which was a strong indication that they were coming from the CPU rather than from the firmware. One was for PCR 0, and one was for PCR 7. This corresponded to the two Boot Guard events that we expected from the event log. The hash in the PCR 0 measurement was the same as the hash in the event log, but the hash in the PCR 7 measurement differed from the hash in the event log. Replacing the event log value with the value actually sent to the TPM resulted in the event log now replaying correctly, supporting the hypothesis that the firmware was failing to correctly reconstruct the event.

What now? The simple thing to do is for us to simply hard code this fixup, but longer term we'd like to figure out how to reconstruct the event so we can calculate the expected value ourselves. Unfortunately there doesn't seem to be any public documentation on this. Sigh.

[1] What stops firmware on a system with no Boot Guard faking those measurements? TPMs have a concept of "localities", effectively different privilege levels. When Boot Guard performs its initial measurement into PCR 0, it does so at locality 3, a locality that's only available to the CPU. This causes PCR 0 to be initialised to a different initial value, affecting the final PCR value. The firmware can't access locality 3, so can't perform an equivalent measurement, so can't fake the value.


20 August 2023

Dirk Eddelbuettel: RcppRedis 0.2.4 on CRAN: Maintenance

Another minor release, now at 0.2.4, of our RcppRedis package arrived on CRAN yesterday. RcppRedis is one of several packages connecting R to the fabulous Redis in-memory datastructure store (and much more). RcppRedis does not pretend to be feature complete, but it may do some things faster than the other interfaces, and also offers an optional coupling with MessagePack binary (de)serialization via RcppMsgPack. The package has carried production loads on a trading floor for several years. It also supports pub/sub dissemination of streaming market data as per this earlier example. This update is (just like the previous one) fairly mechanical. CRAN noticed a shortcoming of the default per-package help page in a number of packages; in our case it was a matter of adding one line for a missing alias to the Rd file. We also demoted the suggested (but retired) rredis package to a mere mention in the DESCRIPTION file, as a formal Suggests: entry, even with an added Additional_repositories, creates a NOTE. Life is simpler without those. The detailed changes list follows.

Changes in version 0.2.4 (2023-08-19)
  • Add missing alias for RcppRedis-package to rhiredis.Rd.
  • Remove Suggests: rredis which triggers a NOTE nag as it is only on an Additional_repositories.

Courtesy of my CRANberries, there is also a diffstat report for this release. More information is on the RcppRedis page. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Russell Coker: GPT Systems and Relationships

Sam Hartman wrote an interesting blog post about his work as a sex and intimacy educator and how GPT systems could impact that [1]. I've read some positive reviews of Replika, a commercial system that is somewhat promoted as a counsellor [2], so I decided to try it out. In my brief trial it seemed to be using all the methods that Android pay-to-play games are known for: having multiple types of in-game currency, paying to buy new clothes etc. for your friend, and so on. Basically it seems pretty horrible. I didn't pay for it and the erotic and romantic features all require payment so I didn't test that. When thinking about this logically, having a system designed to deal with people when they are vulnerable (either being in a romantic relationship or getting counselling) that uses manipulative techniques to get money from them can't have a good result. So a free software system seems the best option. When I first learned of virtual girlfriends I never thought I would feel compelled to advocate for a free software virtual dating program, but that's where the world has got to. Virtual girlfriends have been around for years now. Several years ago I watched a documentary about their use in Japan. It seemed a bit strange when a group of men who had virtual girlfriends had a dinner party with their tablets and phones propped up so their girlfriends could join in, as they all appeared to be dating the same girl. The documentary didn't go into enough detail to cover whether the girlfriend app could learn or be customised enough that they would seem to have different personalities. Virtual boyfriends have also been around for a while, apparently without most people noticing. I just Googled it and found a review of a virtual boyfriend app published in 2016! One thing that will probably concern people is the possibility for virtual dating systems to be used for inappropriate things. That is a reasonable thing to be concerned about but I don't think it's possible to prevent technology that has already been released from doing such things. As a general rule technology can always be used for good and bad things so we need to just make it easy to do good things and let the legal system develop ways of dealing with the bad things.

16 August 2023

Sam Hartman: A First Exercise with AI Training

Taking a hands-on low-level approach to learning AI has been incredibly rewarding. I wanted to create an achievable task that would motivate me to learn the tools and get practical experience training and using large language models. Just at the point when I was starting to spin up GPU instances, Llama2 was released to the public. So I elected to start with that model. As I mentioned, I'm interested in exploring how sex-positive AI can help human connection in positive ways. For that reason, I suspected that Llama2 might not produce good results without training: some of Meta's safety goals run counter to what I'm trying to explore. I suspected that there might be more attention paid to safety in the chat variants of Llama2 rather than the text generation variants, and working against that might be challenging for a first project, so I started with Llama-2-13b as a base. Preparing a Dataset I elected to generate a fine tuning dataset using fiction. Long term, that might not be a good fit. But I've always wanted to understand how an LLM's tone is adjusted: how you get an LLM to speak in a different voice. So much of fine tuning focuses on examples where a given prompt produces a particular result. I wanted to understand how to bring in data that wasn't structured as prompts. The Huggingface course actually gives an example of how to adjust a model set up for masked language modeling trained on wikitext to be better at predicting the vocabulary of movie reviews. There though, doing sample breaks in the dataset at movie review boundaries makes sense. There's another example of training an LLM from scratch based on a corpus of python code. Between these two examples, I figured out what I needed. It was relatively simple in retrospect: tokenize the whole mess, and treat everything as output. That is, compute loss on all the tokens. Long term, using fiction as a way to adjust how the model responds is likely to be the wrong starting point. However, it maximized focus on aspects of training I did not understand and allowed me to satisfy my curiosity. Wrangling the Model I decided to actually try and add additional training to the model directly rather than building an adapter and fine tuning a small number of parameters. Partially this was because I had enough on my mind without understanding how LoRA adapters work. Partially, I wanted to gain an appreciation for the infrastructure complexity of AI training. I have enough of a cloud background that I ought to be able to work on distributed training. (As it turned out, using the BitsAndBytes 8-bit optimizer, I was just able to fit my task onto a single GPU.) I wasn't even sure that I could make a measurable difference in Llama-2-13b running 890,000 training tokens through a couple of training epochs. As it turned out I had nothing to fear on that front. Getting everything to work was more tricky than I expected. I didn't have an appreciation for exactly how memory intensive training was. The Transformers documentation points out that with typical parameters for mixed-precision training, it takes 18 bytes per model parameter (around 230GB for a 13B-parameter model). Using bfloat16 training and an 8-bit optimizer was enough to get things to fit. Of course then I got to play with convergence. My initial optimizer parameters caused the model to diverge, and before I knew it, my model had turned to NaN, and would only output newlines. Oops.
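Before going on to what the logs showed, here is roughly what that "compute loss on all the tokens" setup looks like with the Hugging Face stack. This is a minimal sketch of my own, not Sam's training script: it assumes the transformers and datasets libraries, a hypothetical corpus.txt standing in for the fiction dataset, and swaps in a small gpt2 model so it runs anywhere, where the post used Llama-2-13b with bfloat16 and an 8-bit optimizer.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # stand-in; the post used Llama-2-13b
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

raw = load_dataset("text", data_files={"train": "corpus.txt"})["train"]
train = raw.map(lambda b: tok(b["text"], truncation=True, max_length=512),
                batched=True, remove_columns=["text"])

# mlm=False makes the collator copy input_ids into labels, so every token
# contributes to the loss - no prompt/response structure needed.
collator = DataCollatorForLanguageModeling(tok, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments("out", num_train_epochs=2,
                           per_device_train_batch_size=2, learning_rate=2e-5),
    train_dataset=train,
    data_collator=collator,
)
trainer.train()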
But looking back over the logs, watching what happened to the loss, and looking at the math in the optimizer to understand how I ended up getting something that rounded to a divide by zero gave me a much better intuition for what was going on. The results. This time around I didn't do anything in the way of quantitative analysis of what I achieved. Empirically I definitely changed the tone of the model. The base Llama-2 model tends to steer away from sexual situations. It's relatively easy to get it to talk about affection and sometimes attraction. Unsurprisingly, given the design constraints, it takes a bit to get it to wander into sexual situations. But if you hit it hard enough with your prompt, it will go there, and the results are depressing. At least for prompts I used, it tended to view sex fairly negatively. It tended to be less coherent than with other prompts. One inference managed to pop out in the middle of some text that wasn't hanging together well, Chapter 7 - Rape. With my training, I did manage to achieve my goal of getting the model to use more positive language and emotional signaling when talking about sexual situations. More importantly, I gained a practical understanding of many ways training can go wrong. A lot of articles I've been reading about training make more sense. I have better intuition for why you might want to do training a certain way, or why mechanisms for countering some problem will be important. Future Activities:


Wouter Verhelst: Perl test suites in GitLab

I've been maintaining a number of Perl software packages recently. There's SReview, my video review and transcoding system of which I split off Media::Convert a while back; and as of about a year ago, I've also added PtLink, an RSS aggregator (with future plans for more than just that). All these come with extensive test suites which can help me ensure that things continue to work properly when I play with things; and all of these are hosted on salsa.debian.org, Debian's gitlab instance. Since we're there anyway, I configured GitLab CI/CD to run a full test suite of all the software, so that I can't forget, and also so that I know sooner rather than later when things start breaking. GitLab has extensive support for various test-related reports, and while it took a while to be able to enable all of them, I'm happy to report that today, my perl test suites generate all three possible reports: a JUnit report of the test results, the coverage summary that GitLab extracts from the job output, and a Cobertura coverage report. Additionally, I also store the native perl Devel::Cover report as job artifacts, as they show some information that GitLab does not. It's important to recognize that not all data is useful. For instance, the JUnit report allows for a test name and for details of the test. However, the module that generates the JUnit report from TAP test suites does not make a distinction here; both the test name and the test details are reported as the same. Additionally, the time a test took is measured as the time between the end of the previous test and the end of the current one; there is no "start" marker in the TAP protocol. That being said, it's still useful to see all the available information in GitLab. And it's not even all that hard to do:
test:
  stage: test
  image: perl:latest
  coverage: '/^Total.* (\d+.\d+)$/'
  before_script:
    - cpanm ExtUtils::Depends Devel::Cover TAP::Harness::JUnit Devel::Cover::Report::Cobertura
    - cpanm --notest --installdeps .
    - perl Makefile.PL
  script:
    - cover -delete
    - HARNESS_PERL_SWITCHES='-MDevel::Cover' prove -v -l -s --harness TAP::Harness::JUnit
    - cover
    - cover -report cobertura
  artifacts:
    paths:
    - cover_db
    reports:
      junit: junit_output.xml
      coverage_report:
        path: cover_db/cobertura.xml
        coverage_format: cobertura
Let's expand on that a bit. The first three lines should be clear for anyone who's used GitLab CI/CD in the past. We create a job called test; we start it in the test stage, and we run it in the perl:latest docker image. Nothing spectacular here. The coverage line contains a regular expression. This is applied by GitLab to the output of the job; if it matches, then the first bracket match is extracted, and whatever that contains is assumed to contain the code coverage percentage for the code; it will be reported as such in the GitLab UI for the job that was run, and graphs may be drawn to show how the coverage changes over time. Additionally, merge requests will show the delta in the code coverage, which may help deciding whether to accept a merge request. This regular expression will match on a line of output that the cover program will generate on standard output. The before_script section installs various perl modules we'll need later on. First, we install ExtUtils::Depends. My code uses ExtUtils::MakeMaker, which ExtUtils::Depends depends on (no pun intended); obviously, if your perl code doesn't use that, then you don't need to install it. The next three modules -- Devel::Cover, TAP::Harness::JUnit and Devel::Cover::Report::Cobertura -- are necessary for the reports, and you should include them if you want to copy what I'm doing. Next, we install declared dependencies, which is probably a good idea for you as well, and then we run perl Makefile.PL, which will generate the Makefile. If you don't use ExtUtils::MakeMaker, update that part to do what your build system uses. That should be fairly straightforward. You'll notice that we don't actually use the Makefile. This is because we only want to run the test suite, which in our case (since these are PurePerl modules) doesn't require us to build the software first. One might consider that this makes the call of perl Makefile.PL useless, but I think it's a useful test regardless; if that fails, then obviously we did something wrong and shouldn't even try to go further. The actual tests are run inside a script snippet, as is usual for GitLab. However we do a bit more than you would normally expect; this is required for the reports that we want to generate. Let's unpack what we do there:
cover -delete
This deletes any coverage database that might exist (e.g., due to caching or some such). We don't actually expect any coverage database, but it doesn't hurt.
HARNESS_PERL_SWITCHES='-MDevel::Cover'
This tells the TAP harness that we want it to load the Devel::Cover addon, which can generate code coverage statistics. It stores that in the cover_db directory, and allows you to generate all kinds of reports on the code coverage later (but we don't do that here, yet).
prove -v -l -s
Runs the actual test suite, with verbose output, shuffling (aka, randomizing) the test suite, and adding the lib directory to perl's include path. This works for us, again, because we don't actually need to compile anything; if you do, then -b (for blib) may be required. ExtUtils::MakeMaker creates a test target in its Makefile, and usually this is how you invoke the test suite. However, it's not the only way to do so, and indeed if you want to generate a JUnit XML report then you can't do that. Instead, in that case, you need to use prove, so that you can tell it to load the TAP::Harness::JUnit module by way of the --harness option, which will then generate the JUnit XML report. By default, the JUnit XML report is generated in a file junit_output.xml. It's possible to customize the filename for this report, but GitLab doesn't care and neither do I, so I don't. Uploading the JUnit XML format tells GitLab which tests were run and whether they passed. Finally, we invoke the cover script twice to generate two coverage reports; once we generate the default report (which generates HTML files with detailed information on all the code that was triggered in your test suite), and once with the -report cobertura parameter, which generates the cobertura XML format. Once we've generated all our reports, we then need to upload them to GitLab in the right way. The native perl report, which is in the cover_db directory, is uploaded as a regular job artifact, which we can then look at through a web browser, and the two XML reports are uploaded in the correct way for their respective formats. All in all, I find that doing this makes it easier to understand how my code is tested, and why things go wrong when they do.

13 August 2023

Jonathan Dowland: Terrain base for 3D castle

terrain base for the castle
I designed and printed a "terrain" base for my 3D castle in OpenSCAD. The castle was the first thing I designed and printed on our (then new) office 3D printer. I use it as a test bed if I want to try something new, and this time I wanted to try procedurally generating a model. I've released the OpenSCAD source for the terrain generator under the name Zarchscape.
mid 90s terrain generation
Lots of mid-90s games had very boxy floors
Terrain generation, 90s-style. From this article: https://web.archive.org/web/19990822085321/http://www.gamedesign.net/tutorials/pavlock/cool-ass-terrain/
Back in the 90s I spent some time designing maps/levels/arenas for Quake and its sibling games (like Half-Life), mostly in the tool Worldcraft. A lot of beginner maps (including my own) ended up looking pretty boxy. I once stumbled across a blog post that taught me a useful trick for making more natural-looking terrain. In brief: tessellate the floor region with triangle polygons, then randomly add some jitter to the z-dimension for their vertices. A really simple technique with fairly dramatic results (a small sketch of the idea appears at the end of this post).
OpenSCAD
Doing the same in OpenSCAD stretched me, and I think stretched OpenSCAD. It left me with some opinions which I'll try to write up in a future blog post.
Final results
multicolour
I've generated and printed the result a couple of times, including an attempt at a multicolour print. At home, I have a large spool of brown-coloured recycled PLA, and many small lengths of samples in various colours (that I picked up at Maker Faire Czech Republic last year), including some short lengths of green. My home printer is a Prusa Mini, and I cheaped out and didn't buy the filament runout sensor, which would detect when the current filament ran out and let me handle the situation gracefully. Instead, I added several colour change instructions to the g-code at various heights, hoping that whatever plastic I loaded for each layer was enough to get the print to the next colour change instruction. The results are a little mixed, I think. I didn't catch the final layer running out in time (forgetting that the Bowden tube also means I need to catch it running out before the loading gear, a few inches earlier than the nozzle), so the final lush green colour ends prematurely. I've also got a fair bit of stringing to clean up. Finally, all these non-flat planes really show up some of the limitations of regular slicing. It would be interesting to try this with a non-planar slicer.
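For the curious, here is the jitter trick from earlier reduced to a tiny sketch. This is my own illustration in Python rather than the Zarchscape OpenSCAD code: build a grid, split each cell into two triangles, and nudge every vertex's z by a random amount, with one shared z per grid point so neighbouring triangles stay stitched together.
import random

def jittered_terrain(cols, rows, cell=10.0, jitter=2.0, seed=42):
    rng = random.Random(seed)
    # One z value per grid vertex, shared by all triangles that touch it.
    z = [[rng.uniform(-jitter, jitter) for _ in range(cols + 1)] for _ in range(rows + 1)]
    v = lambda i, j: (i * cell, j * cell, z[j][i])
    triangles = []
    for j in range(rows):
        for i in range(cols):
            a, b, c, d = v(i, j), v(i + 1, j), v(i + 1, j + 1), v(i, j + 1)
            triangles += [(a, b, c), (a, c, d)]  # two triangles per grid cell
    return triangles

print(len(jittered_terrain(4, 4)))  # 32 triangles for a 4x4 grid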
